950 results for NiPAT, code pattern analysis, object-oriented programming languages
Abstract:
Background: Over the last few years, a number of researchers have investigated how to improve the reuse of crosscutting concerns. New possibilities have emerged with the advent of aspect-oriented programming, and many frameworks have been designed around the abstractions provided by this new paradigm. We call this type of framework a Crosscutting Framework (CF), as it usually encapsulates a generic and abstract design of one crosscutting concern. However, most of the proposed CFs employ white-box strategies in their reuse process, requiring mainly two technical skills: (i) knowing the syntax details of the programming language used to build the framework and (ii) being aware of the architectural details of the CF and its internal nomenclature. Another problem is that the reuse process can only begin once development reaches the implementation phase, preventing it from starting earlier. Method: To address these problems, we present in this paper a model-based approach for reusing CFs which shields application engineers from technical details, letting them concentrate on what the framework really needs from the application under development. To support our approach, two models are proposed: the Reuse Requirements Model (RRM) and the Reuse Model (RM). The former describes the framework structure, while the latter supports the reuse process. As soon as the application engineer has filled in the RM, the reuse code can be generated automatically. Results: We also present the results of two comparative experiments using two versions of a Persistence CF: the original one, whose reuse process is based on writing code, and the new one, which is model-based. The first experiment evaluated productivity during the reuse process, and the second evaluated the effort of maintaining applications developed with both CF versions. The results show an improvement of 97% in productivity; however, little difference was observed in the effort required to maintain the applications. Conclusion: Using the approach presented here, we conclude that (i) it is possible to automate the instantiation of CFs, and (ii) developer productivity improves when a model-based instantiation approach is used.
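The difference between white-box reuse and model-based generation can be made concrete with a minimal Java sketch (the class and method names below are hypothetical illustrations, not taken from the Persistence CF): white-box reuse forces the application engineer to know the framework's internal extension points, which is exactly the knowledge the RRM and RM are intended to hide.

```java
// Hypothetical white-box reuse of a persistence framework: the engineer must
// know the internal hook method and its naming and contract conventions.
abstract class PersistenceHook {
    // Internal extension point of the (imaginary) framework.
    abstract String tableNameFor(Class<?> entity);

    void persist(Object entity) {
        String table = tableNameFor(entity.getClass());
        System.out.println("INSERT INTO " + table + " ...");
    }
}

// Hand-written reuse code; in the model-based approach, code playing this
// role would be generated automatically from the filled-in Reuse Model.
class CustomerPersistence extends PersistenceHook {
    @Override
    String tableNameFor(Class<?> entity) {
        return entity.getSimpleName().toUpperCase();
    }
}
```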
Abstract:
The increasing precision of current and future experiments in high-energy physics requires a corresponding increase in the accuracy of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to higher accuracy directly translates into including higher-order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process at higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the tools necessary to automate the calculation procedures as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computation. During the development of xloops it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows the development of large-scale symbolic applications in an object-oriented fashion within the C++ programming language. This system, which is now also in use for other projects besides xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems. Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of the functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as supplementary modules belonging to the interface between the library of integral functions and the diagram generator.
Abstract:
Today there are many techniques that allow exploiting vulnerabilities of an application, and just as many techniques designed to stop these exploit attacks. This thesis highlights how a specific type of attack, based on a technique called Return Oriented Programming (ROP), can easily be applied to binaries with particular characteristics. A new method is presented that allows the injection of "useful" code into Open Source projects without arousing suspicion; this is possible because of the harmless appearance of the injected code. This useful code facilitates a ROP attack against an executable that contains vulnerable bugs. The injection process can be envisioned in environments where users can contribute their own code to a particular Open Source project. This thesis also highlights how current software protections are not correctly applied to Open Source projects, thus enabling the proposed approach.
Abstract:
Modern software systems, in particular distributed ones, are everywhere around us and are at the basis of our everyday activities. Hence, guaranteeing their correctness, consistency and safety is of paramount importance, and their complexity makes the verification of such properties a very challenging task. It is natural to expect these systems to be reliable and, above all, usable. (i) In order to be reliable, compositional models of software systems need to account for consistent dynamic reconfiguration, i.e., changing at runtime the communication patterns of a program. (ii) In order to be useful, compositional models of software systems need to account for interaction, which can be seen as communication patterns among components that collaborate to achieve a common task. The aim of this Ph.D. was to develop powerful techniques based on formal methods for the verification of correctness, consistency and safety properties related to dynamic reconfiguration and communication in complex distributed systems. In particular, static analysis techniques based on types and type systems appeared to be an adequate methodology, considering their success in guaranteeing not only basic safety properties but also more sophisticated ones, such as deadlock or livelock freedom in a concurrent setting. The main contributions of this dissertation are twofold. (i) On the components side, we design types and a type system for a concurrent object-oriented calculus to statically ensure consistency of dynamic reconfigurations related to modifications of communication patterns in a program during execution time. (ii) On the communication side, we study advanced safety properties related to communication in complex distributed systems, such as deadlock-freedom, livelock-freedom and progress. Most importantly, we exploit an encoding of types and terms of a typical distributed language, the session π-calculus, into the standard typed π-calculus, in order to understand their expressive power.
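As a purely illustrative aside (the dissertation reasons at the level of the session π-calculus, not Java), deadlock-freedom rules out executions like the classic lock-ordering cycle sketched below, where each thread waits forever for a lock held by the other:

```java
// Two threads acquire the same two locks in opposite order; each ends up
// waiting for the lock the other holds, so neither can make progress.
public class DeadlockDemo {
    public static void main(String[] args) {
        final Object a = new Object();
        final Object b = new Object();

        Thread t1 = new Thread(() -> {
            synchronized (a) {
                pause();                 // give t2 time to acquire b
                synchronized (b) { }     // blocks forever
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (b) {
                pause();                 // give t1 time to acquire a
                synchronized (a) { }     // blocks forever
            }
        });
        t1.start();
        t2.start();
    }

    static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { }
    }
}
```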
Abstract:
In this thesis, the author presents a query language for an RDF (Resource Description Framework) database and discusses its applications in the context of the HELM project (the Hypertextual Electronic Library of Mathematics). This language aims at meeting the main requirements coming from the RDF community. In particular, it includes: a human-readable textual syntax and a machine-processable XML (Extensible Markup Language) syntax, both for queries and for query results; a rigorously exposed formal semantics; a graph-oriented RDF data access model capable of exploring an entire RDF graph (including both RDF Models and RDF Schemata); a full set of Boolean operators to compose the query constraints; fully customizable and highly structured query results having a 4-dimensional geometry; and some constructs taken from ordinary programming languages that simplify the formulation of complex queries. The HELM project aims at integrating modern tools for the automation of formal reasoning with the most recent electronic publishing technologies, in order to create and maintain a hypertextual, distributed virtual library of formal mathematical knowledge. In the spirit of the Semantic Web, the documents of this library include RDF metadata describing their structure and content in a machine-understandable form. Using the author's query engine, HELM exploits this information to implement functionalities allowing the interactive and automatic retrieval of documents on the basis of content-aware requests that take into account the mathematical nature of these documents.
Abstract:
When it comes to control architectures in the Web domain, the Event Model is undoubtedly the most widespread and widely adopted. Asynchronicity and a high degree of user interaction are typical characteristics of Web Applications, and an event-driven architecture, thanks to its typical control cycle called the Event Loop, provides a simple yet sufficiently expressive abstraction to satisfy these requirements. The growth of the Internet and of its associated technologies, together with recent advances in multi-core CPUs, has provided fertile ground for the development of increasingly complex Web Applications. This increase in complexity, however, has exposed some limits of the event model that are still not fully resolved today. This work proposes a different approach to this class of problems, one that overcomes the limits found in the event model by adopting a different architecture, born in the field of AI but now gaining popularity in general-purpose programming as well: the Agent Model. Agent architectures adopt a control cycle similar to the Event Loop of the event model, but with some profound differences: the Control Loop. The aim of this thesis is therefore to examine the two kinds of architecture in depth, highlighting their differences, showing what it means to design and develop a Web Application with different technologies and different control cycles, and pointing out the strengths and weaknesses of the two approaches.
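A rough Java sketch of the two control styles discussed in the thesis (all names here are illustrative, not taken from the thesis itself): an event loop passively dequeues and runs handlers as events arrive, whereas an agent owns its control loop and repeatedly senses, deliberates, and acts.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Event Model: handlers are queued and executed one at a time by the loop.
class EventLoop {
    private final BlockingQueue<Runnable> events = new LinkedBlockingQueue<>();

    void post(Runnable handler) { events.add(handler); }

    void run() throws InterruptedException {
        while (true) {
            events.take().run();   // wait for the next event and handle it
        }
    }
}

// Agent Model: the control loop belongs to the agent, which repeatedly
// observes its environment, deliberates, and then acts.
abstract class Agent {
    void controlLoop() {
        while (true) {
            Object percept = sense();
            Runnable action = deliberate(percept);
            action.run();
        }
    }
    abstract Object sense();
    abstract Runnable deliberate(Object percept);
}
```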
Abstract:
Almost 10 years after "The Free Lunch Is Over" article, in which the need to parallelize programs started to become a real and mainstream issue, a lot has happened: processor manufacturers are reaching the physical limits of most of their approaches to boosting CPU performance and are instead turning to hyperthreading and multicore architectures; applications increasingly need to support concurrency; and programming languages and systems are increasingly forced to deal well with concurrency. This thesis is an attempt to propose an overview of a paradigm that aims to properly abstract the problem of propagating data changes: Reactive Programming (RP). This paradigm proposes an asynchronous, non-blocking approach to concurrency and computation, abstracting away from low-level concurrency mechanisms.
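A minimal Java sketch of the core idea, propagating data changes automatically through a hand-rolled observable value; this illustrates the paradigm only and is not based on any particular RP library:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

// A value that pushes every change to its subscribers, so dependent values
// are updated automatically instead of being recomputed or polled by hand.
class Reactive<T> {
    private T value;
    private final List<Consumer<T>> subscribers = new ArrayList<>();

    Reactive(T initial) { value = initial; }

    void subscribe(Consumer<T> s) { subscribers.add(s); }

    void set(T newValue) {
        value = newValue;
        subscribers.forEach(s -> s.accept(newValue));     // propagate the change
    }

    <R> Reactive<R> map(Function<T, R> f) {
        Reactive<R> derived = new Reactive<>(f.apply(value));
        subscribers.add(v -> derived.set(f.apply(v)));     // keep the derived value in sync
        return derived;
    }
}

class ReactiveDemo {
    public static void main(String[] args) {
        Reactive<Integer> source = new Reactive<>(1);
        Reactive<Integer> doubled = source.map(x -> x * 2);
        doubled.subscribe(x -> System.out.println("doubled = " + x));
        source.set(21);   // prints "doubled = 42" without any explicit recomputation
    }
}
```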
Abstract:
Grammars for programming languages are traditionally specified statically. They are hard to compose and reuse due to ambiguities that inevitably arise. PetitParser combines ideas from scannerless parsing, parser combinators, parsing expression grammars and packrat parsers to model grammars and parsers as objects that can be reconfigured dynamically. Through examples and benchmarks we demonstrate that dynamic grammars are not only flexible but highly practical.
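The idea of grammars as ordinary objects that can be composed and reconfigured at run time can be sketched with a few generic parser combinators in Java (this is an illustration of the general technique, not the PetitParser API):

```java
// A parser is just an object: it can be stored, passed around, combined,
// and swapped out at run time like any other value.
interface Parser {
    int parse(String input, int pos);   // returns the new position, or -1 on failure
}

class Combinators {
    static Parser ch(char c) {
        return (s, p) -> (p < s.length() && s.charAt(p) == c) ? p + 1 : -1;
    }
    static Parser seq(Parser a, Parser b) {
        return (s, p) -> {
            int q = a.parse(s, p);
            return q < 0 ? -1 : b.parse(s, q);
        };
    }
    static Parser or(Parser a, Parser b) {
        return (s, p) -> {
            int q = a.parse(s, p);
            return q >= 0 ? q : b.parse(s, p);
        };
    }

    public static void main(String[] args) {
        // The grammar ("ab" | "ac"), built dynamically from smaller parsers.
        Parser grammar = or(seq(ch('a'), ch('b')), seq(ch('a'), ch('c')));
        System.out.println(grammar.parse("ac", 0));   // 2  (success, two characters consumed)
        System.out.println(grammar.parse("ad", 0));   // -1 (failure)
    }
}
```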
Abstract:
Software must be constantly adapted to evolving domain knowledge and unanticipated requirements changes. To adapt a system at run time we need to reflect on its structure and its behavior. Object-oriented languages introduced reflection to deal with this issue; however, no reflective approach up to now has tried to provide a unified solution to both structural and behavioral reflection. This paper describes Albedo, a unified approach to structural and behavioral reflection. Albedo is a model of fine-grained, unanticipated, dynamic structural and behavioral adaptation. Instead of providing reflective capabilities as an external mechanism, we integrate them deeply into the environment. We show how explicit meta-objects allow us to provide a range of reflective features and thereby evolve both application models and environments at run time.
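For readers unfamiliar with the distinction, the following Java sketch (an illustration only; Albedo is a Smalltalk model and goes much further by making meta-objects explicit and integrating them into the environment) separates structural reflection, inspecting the shape of an object, from behavioral reflection, selecting and running behavior at run time:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class ReflectionDemo {
    static class Point {
        int x = 1, y = 2;
        int sum() { return x + y; }
    }

    public static void main(String[] args) throws Exception {
        Point p = new Point();

        // Structural reflection: inspect the fields that make up the object.
        for (Field f : Point.class.getDeclaredFields()) {
            System.out.println(f.getName() + " = " + f.getInt(p));
        }

        // Behavioral reflection: look up and invoke a method at run time.
        Method m = Point.class.getDeclaredMethod("sum");
        System.out.println("sum() -> " + m.invoke(p));
    }
}
```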
Abstract:
Java Enterprise Applications (JEAs) are large systems that integrate multiple technologies and programming languages. To support the analysis of JEAs we have developed MooseJEE, an extension of the Moose environment capable of modeling the typical elements of JEAs.
Abstract:
After decades of development in programming languages and programming environments, Smalltalk is still one of the few environments that provide advanced features and is still widely used in industry. However, as Java became prevalent, the ability to call Java code from Smalltalk and vice versa has become important. Traditional approaches to integrating the Java and Smalltalk languages rely on low-level communication between separate Java and Smalltalk virtual machines. We are not aware of any attempt to execute and integrate the Java language directly in the Smalltalk environment. A direct integration allows for very tight and almost seamless integration of the languages and their objects within a single environment. Yet integration and language interoperability impose challenging issues related to method naming conventions, method overloading, exception handling and thread-locking mechanisms. In this paper we describe ways to overcome these challenges and to integrate Java into the Smalltalk environment. Using the techniques described in this paper, the programmer can call Java code from Smalltalk using standard Smalltalk idioms while the semantics of each language remains preserved. We present STX:LIBJAVA, an implementation of the Java virtual machine within Smalltalk/X, as a validation of our approach.
Abstract:
Most of today's dynamic analysis approaches are based on method traces. However, in the case of object orientation, understanding program execution by analyzing method traces is complicated because the behavior of a program depends on the sharing and transfer of object references (aliasing). We argue that trace-based dynamic analysis is at too low a level of abstraction for object-oriented systems. We propose a new approach that captures the life cycle of objects by explicitly taking into account object aliasing and how aliases propagate during the execution of the program. In this paper, we present our new meta-model in detail and discuss future tracks opened by it.
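The difficulty described above, that method traces miss the sharing of object references, shows up even in a two-line Java example; the proposed meta-model records exactly this kind of alias propagation, which a plain trace cannot reveal:

```java
import java.util.ArrayList;
import java.util.List;

public class AliasingDemo {
    public static void main(String[] args) {
        List<String> a = new ArrayList<>();
        List<String> b = a;            // alias: a and b refer to the same object

        b.add("hello");                // a trace only records an add() on "some list"

        // The state observable through 'a' changed as well, because of the alias.
        System.out.println(a.size());  // 1
    }
}
```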
Abstract:
The Zagros oak forests in Western Iran are critically important to the sustainability of the region. These forests have undergone dramatic declines in recent decades. We evaluated the utility of the non-parametric Random Forest classification algorithm for land cover classification of Zagros landscapes and selected the best spatial and spectral predictive variables. The algorithm yielded high overall classification accuracies (>85%) and equivalent classification accuracies for the datasets from the three different sensors. We evaluated the associations between trends in forest area and structure and trends in socioeconomic and climatic conditions, to identify the most likely driving forces behind deforestation and landscape structure change. We used available socioeconomic (urban and rural population, and rural income) and climatic (mean annual rainfall and mean annual temperature) data for two provinces in northern Zagros. The driving force most strongly correlated with forest area loss was urban population, with climatic variables correlated to a lesser extent. Landscape structure changes were more closely associated with rural population. We examined the effects of scale changes on the results of spatial pattern analysis. We assessed the impacts of eight years of protection in a protected area in northern Zagros at two different scales (both grain and extent); the effects of protection on the amount and structure of forests were scale dependent. We evaluated the nature and magnitude of changes in forest area and structure over the entire Zagros region from 1972 to 2009. We divided the Zagros region into 167 Landscape Units and developed two measures, Deforestation Sensitivity (DS) and Connectivity Sensitivity (CS), computed for each landscape unit as the percentage of time steps in which forest area or ECA decreased by more than 10%. A considerable loss of forest area and connectivity was detected, but no sudden (nonlinear) changes were detected at the spatial and temporal scale of the study. Connectivity loss occurred more rapidly than forest loss due to the loss of connecting patches. More connectivity was lost in southern Zagros due to climatic differences and different forms of traditional land use.
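Deforestation Sensitivity, as defined above (the percentage of time steps in which forest area drops by more than 10%), can be computed directly from a time series; the Java sketch below uses made-up numbers, not the study's data:

```java
public class DeforestationSensitivity {
    // DS = percentage of time steps whose forest area is more than 10% below
    // the value at the previous time step (Connectivity Sensitivity is the
    // analogous measure computed on ECA instead of forest area).
    static double ds(double[] forestArea) {
        int drops = 0;
        for (int t = 1; t < forestArea.length; t++) {
            if (forestArea[t] < 0.9 * forestArea[t - 1]) {
                drops++;
            }
        }
        return 100.0 * drops / (forestArea.length - 1);
    }

    public static void main(String[] args) {
        double[] area = {100, 95, 80, 78, 60};          // hypothetical values per time step
        System.out.println("DS = " + ds(area) + "%");   // 50.0%: 2 of the 4 steps drop >10%
    }
}
```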
Abstract:
In this paper we compare the performance of two image classification paradigms (object- and pixel-based) for creating a land cover map of Asmara, the capital of Eritrea, and its surrounding areas using Landsat ETM+ imagery acquired in January 2000. The image classification methods used were maximum likelihood for the pixel-based approach and Bhattacharyya distance for the object-oriented approach, available in the ArcGIS and SPRING software packages, respectively. Advantages and limitations of both approaches are presented and discussed. Classification outputs were assessed using overall accuracy and Kappa indices. The pixel- and object-based classification methods resulted in overall accuracies of 78% and 85%, respectively. The Kappa coefficients for the pixel- and object-based approaches were 0.74 and 0.82, respectively. Although the pixel-based approach is the most commonly used method, assessment and visual interpretation of the results clearly reveal that the object-oriented approach has advantages for this specific case study.
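Both figures reported above, overall accuracy and the Kappa coefficient, are derived from the classification confusion matrix; the short Java sketch below shows the standard computation on a made-up 2x2 matrix (not the paper's data):

```java
public class KappaDemo {
    // Kappa = (po - pe) / (1 - pe), where po is the observed agreement
    // (overall accuracy) and pe the agreement expected by chance.
    static double kappa(int[][] m) {
        double total = 0, diagonal = 0, chance = 0;
        for (int[] row : m)
            for (int cell : row)
                total += cell;
        for (int i = 0; i < m.length; i++) {
            double rowSum = 0, colSum = 0;
            for (int j = 0; j < m.length; j++) {
                rowSum += m[i][j];
                colSum += m[j][i];
            }
            diagonal += m[i][i];
            chance += (rowSum * colSum) / total;
        }
        double po = diagonal / total;   // overall accuracy
        double pe = chance / total;     // chance agreement
        return (po - pe) / (1 - pe);
    }

    public static void main(String[] args) {
        int[][] confusion = { {45, 5}, {10, 40} };            // hypothetical reference-vs-map counts
        System.out.println("kappa = " + kappa(confusion));    // 0.7 for this matrix (po = 0.85)
    }
}
```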