980 results for Testing aspect-oriented programs
Abstract:
The World Wide Web has consolidated itself over recent years as a standard platform for delivering software systems over the Internet. Nowadays, a great variety of user applications is available on the Web, ranging from corporate applications to the banking domain, and from electronic commerce to the governmental domain. Given the amount of information available and the number of users dealing with these services, many Web systems have sought to offer usage recommendations as part of their functionality, so that users can make better use of the available services based on their profile, navigation history and system usage. In this context, this dissertation proposes the development of an agent-based framework that offers recommendations to users of Web systems. It involves the conception, design and implementation of an object-oriented framework whose agents can be plugged into or unplugged from existing Web applications in a non-invasive way, using aspect-oriented techniques. The framework is evaluated through its instantiation in three different Web systems.
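The abstract gives no implementation details, but the non-invasive plugging it describes is typically achieved by weaving an aspect over the existing application. The sketch below uses annotation-style AspectJ; the package com.example.webapp, the RecommendationAgent class and all other names are assumptions made purely for illustration.

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.AfterReturning;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Pointcut;

    // Minimal illustrative agent: records navigation history and computes suggestions.
    class RecommendationAgent {
        void observe(String operation, Object outcome) { /* update the user's history */ }
        java.util.List<String> recommendFor(String userId) { return java.util.List.of(); }
    }

    // Plugging the agent into the existing application without editing its code:
    // weaving this aspect enables recommendations, removing it unplugs the feature.
    @Aspect
    public class RecommendationAgentAspect {

        private final RecommendationAgent agent = new RecommendationAgent();

        // Every request handled by the (hypothetical) controller layer of the web application.
        @Pointcut("execution(* com.example.webapp.controller..*.*(..))")
        void userRequest() { }

        @AfterReturning(pointcut = "userRequest()", returning = "result")
        public void recommend(JoinPoint jp, Object result) {
            agent.observe(jp.getSignature().toShortString(), result);
            agent.recommendFor("anonymous"); // placeholder: the user id would come from the session
        }
    }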
Abstract:
Aspect-Oriented Software Development (AOSD) is a technique that complements Object-Oriented Software Development (OOSD) by modularizing several concerns that OOSD approaches do not modularize appropriately. However, the current state of the art in AOSD suffers from software evolution, mainly because aspect definitions can stop working correctly when base elements evolve. A promising approach to this problem is the definition of model-based pointcuts, in which pointcuts are defined over a conceptual model rather than directly over base elements, making them less fragile under software evolution. Based on that strategy, this work defines a conceptual model at a high abstraction level in which software patterns and architectures can be specified and then, through Model-Driven Development (MDD) techniques, instantiated and composed in an architecture description language that supports aspect modeling at the architecture level. Our MDD approach allows concepts at the architecture level to be propagated to other abstraction levels (the design level, for example) through MDA transformation rules. This work also presents an Eclipse plug-in called AOADLwithCM, created to support our development process. The AOADLwithCM plug-in was used to describe a case study based on the MobileMedia system, which shows step by step how the Conceptual Model approach can minimize fragile-pointcut problems caused by software evolution. The MobileMedia case study was also used as input to analyze software evolution according to the metrics proposed by Khatchadourian, Greenwood and Rashid, and we analyze how evolution in the base model affects maintenance of the aspectual model with and without the Conceptual Model approach.
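The fragile-pointcut problem mentioned above can be pictured with a small, hypothetical annotation-style AspectJ fragment: a pointcut tied to the names used in the base code silently stops matching when a method is renamed, whereas a pointcut tied to a conceptual-level marker (simulated here with an annotation) survives that evolution. All names, including the PersistentOperation annotation, are assumptions for illustration only.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;

    // Marker standing in for a classification coming from the conceptual model.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface PersistentOperation { }

    @Aspect
    public class LoggingConcern {

        // Fragile: matches by naming convention. If the base code evolves and
        // savePhoto is renamed to storePhoto, this pointcut silently stops matching.
        @Before("execution(* com.example.mobilemedia..*.savePhoto(..))")
        public void logByName() {
            System.out.println("persistent operation reached (name-based pointcut)");
        }

        // Less fragile: matches the conceptual-level marker instead of concrete names,
        // so renaming the method does not break the pointcut as long as the element
        // keeps playing the same role in the conceptual model.
        @Before("execution(@PersistentOperation * *(..))")
        public void logByConcept() {
            System.out.println("persistent operation reached (model-based pointcut)");
        }
    }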
Abstract:
New programming language paradigms have commonly been tested and eventually incorporated into hardware description languages. Recently, aspect-oriented programming (AOP) has proven successful in improving the modularity of object-oriented and structured languages such as Java, C++ and C. Thus, one can expect that, using AOP, one can improve the understanding of the hardware systems under design, as well as make their components more reusable and easier to maintain. We apply AOP to applications developed using the SystemC library. Several examples are presented illustrating how to combine AOP and SystemC, and the benefits of this new approach are discussed.
Abstract:
Nowadays, there are many aspect-oriented middleware implementations that take advantage of the modularity provided by the aspect-oriented paradigm. Although such works always present an assessment of the middleware according to some quality attribute, there is no specific set of metrics to assess them comprehensively across various quality attributes. This work proposes a suite of metrics for the assessment of aspect-oriented middleware systems at different development stages: design, refactoring, implementation and runtime. The work presents the metrics and how they are applied at each development stage. The suite is composed of metrics associated with static properties (modularity, maintainability, reusability, flexibility, complexity, stability, and size) and dynamic properties (performance and memory consumption). These metrics are based on existing assessment approaches for object-oriented and aspect-oriented systems. The proposed metrics are applied in the context of OiL (Orb in Lua), a middleware based on CORBA and implemented in Lua, and of AO-OiL, a refactoring of OiL that follows a reference architecture for aspect-oriented middleware systems. The case study performed on OiL and AO-OiL is a system for monitoring oil wells. This work also presents the CoMeTA-Lua tool, which automates the collection of coupling and size metrics from Lua source code.
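The abstract does not define the individual metrics, but a static coupling measure of the kind listed above can be illustrated with a small, hypothetical sketch: given a map from each module to the modules it references, compute each module's efferent coupling (the number of distinct other modules it depends on). The data model and module names are assumptions, not part of CoMeTA-Lua.

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    // Illustrative efferent-coupling computation over a pre-extracted dependency map.
    public class CouplingMetric {

        public static Map<String, Long> efferentCoupling(Map<String, List<String>> dependencies) {
            return dependencies.entrySet().stream()
                    .collect(Collectors.toMap(
                            Map.Entry::getKey,
                            e -> e.getValue().stream()
                                    .filter(target -> !target.equals(e.getKey())) // ignore self-references
                                    .distinct()
                                    .count()));
        }

        public static void main(String[] args) {
            Map<String, List<String>> deps = Map.of(
                    "dispatcher", List.of("protocol", "marshal"),
                    "protocol",   List.of("marshal"),
                    "marshal",    List.of());
            // e.g. {dispatcher=2, protocol=1, marshal=0} (map iteration order may vary)
            System.out.println(efferentCoupling(deps));
        }
    }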
Abstract:
Purpose - This paper proposes an interpolating approach of the element-free Galerkin method (EFGM) coupled with a modified truncation scheme for solving Poisson's boundary value problems in domains involving material non-homogeneities. The suitability and efficiency of the proposed implementation are evaluated for a given set of test cases of electrostatic fields in domains involving different material interfaces. Design/methodology/approach - The authors combined an interpolating approximation with a modified domain truncation scheme, which avoids the additional techniques for enforcing the Dirichlet boundary conditions and for dealing with material interfaces usually employed in meshfree formulations. Findings - The local electric potential and field distributions were correctly described, as well as global quantities such as the total power and resistance. Since the treatment of the material interfaces becomes practically the same for both the finite element method (FEM) and the proposed EFGM, FEM-oriented programs can thus be easily extended to provide EFGM approximations. Research limitations/implications - The robustness of the proposed formulation became evident from the error analyses of the local and global variables, including in the case of high material discontinuity. Practical implications - The proposed approach has been shown to be as robust as linear FEM. Thus, it becomes an attractive alternative, also because it avoids the use of the additional techniques for dealing with boundary/interface conditions commonly employed in meshfree formulations. Originality/value - This paper reintroduces domain truncation in the EFGM context, but by using a set of interpolating shape functions the authors avoided the use of Lagrange multipliers as well, even in the presence of high material discontinuity.
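For reference, a generic form of the interface problem the abstract deals with, together with the element-free Galerkin approximation built from moving-least-squares shape functions, can be written as follows (standard textbook notation, not equations taken from the paper):

    \[ -\nabla \cdot \big( \varepsilon(\mathbf{x})\, \nabla u \big) = \rho \quad \text{in } \Omega, \qquad u = \bar{u} \ \text{on } \Gamma_D, \]
    \[ u_1 = u_2, \qquad \varepsilon_1 \frac{\partial u_1}{\partial n} = \varepsilon_2 \frac{\partial u_2}{\partial n} \quad \text{on the material interface}, \]
    \[ u^h(\mathbf{x}) = \sum_{I=1}^{n} \phi_I(\mathbf{x})\, u_I, \quad \text{with } \phi_I \text{ the (interpolating) moving-least-squares shape functions.} \]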
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Software Engineering originated with the motivation to mass-produce components and thus increase productivity in the construction of systems. Since its origins, numerous studies have been proposed on the subject as new techniques for building systems, such as Object-Oriented Programming and Aspect-Oriented Programming, have been established and methodologies have been developed to apply them efficiently. However, years of study in the area have not been sufficient to create a methodology for reusing software artifacts that is truly efficient and easy enough to become widespread. Given this, Model-Driven Development (MDD) tries to promote such reuse by taking system models as the reference, making them part of the development process and aiming at a large productivity gain. One of its approaches is Model-Driven Software Development (MDSD), which focuses on improving development practices by using Domain-Specific Languages (DSLs). In this final paper, Xtext is used as a tool to evaluate the productivity and efficiency of this approach; to that end, bibliographic studies were carried out on the approach and on the tool, and a methodology and a case study are presented to demonstrate the results and conclusions of this work.
Abstract:
In the first paper presented to you today by Dr. Spencer, an expert in the animal biology field and an official authority at the same time, you heard about the requirements imposed on a chemical in order to pass the different official hurdles before it will ever be accepted as a proven tool in wildlife management. Many characteristics have to be known and highly sophisticated tests have to be run. In many instances the governmental agency maintains its own screening, testing or analytical programs according to standard procedures. It would be impossible, however, for economic and time reasons to work out all the necessary data themselves. They therefore depend largely on the information furnished by the individual industry, which naturally has to be established as conscientiously as possible. This, among other things, Dr. Spencer has made very clear; and this is also what causes quite a few headaches for the individual industry, but I am certainly not speaking only for myself in saying that Industry fully realizes its important role in developing materials for vertebrate control and the responsibilities lying therein. This type of work - better to say, cooperative work with the official institutions - is, however, only one part, and for the most part the smallest part, of the work which Industry devotes to the development of compounds for pest control. It actually refers only to those very few compounds which are known to be effective. But how do we get to know about their properties in the first place? How does Industry make the selection from the many thousands of compounds synthesized each year? This, by far, creates the biggest problems, at least from the scientific and technical standpoint. Let us rest here for a short while and think about the possible ways of screening and selecting effective compounds. Basically there are two different ways. One is the empirical way of screening as big a number of compounds as possible, under the supposition that with the number of incidences the chances for a "hit" increase too. You can also call this type of approach the statistical or the analytical one, the mass screening of new, mostly unknown candidate materials. This type of testing can only be performed by a producer of many new materials, that means by big industries. It requires a tremendous investment in personnel, time and equipment and is based on highly simplified but indicative test methods, the results of which have to be reliable and representative for practical purposes. The other extreme is the intellectual way of theorizing effective chemical configurations. Defenders of this method claim to be able, now or later, to predict biological effectiveness on the basis of the chemical structure or certain groups in it. Certain pre-experience is necessary, that means knowledge of the importance of certain molecular requirements; then the detection of new and effective complete molecules is a matter of coordination to be performed by smart people or computers. You can also call this method the synthetical or coordinative method.
Abstract:
Background: Over the last years, a number of researchers have investigated how to improve the reuse of crosscutting concerns. New possibilities have emerged with the advent of aspect-oriented programming, and many frameworks were designed considering the abstractions provided by this new paradigm. We call this type of framework Crosscutting Frameworks (CF), as they usually encapsulate a generic and abstract design of one crosscutting concern. However, most of the proposed CFs employ white-box strategies in their reuse process, requiring mainly two technical skills: (i) knowing syntax details of the programming language employed to build the framework and (ii) being aware of the architectural details of the CF and its internal nomenclature. Another problem is that the reuse process can only be initiated once the development process reaches the implementation phase, preventing it from starting earlier. Method: In order to solve these problems, we present in this paper a model-based approach for reusing CFs which shields application engineers from technical details, letting them concentrate on what the framework really needs from the application under development. To support our approach, two models are proposed: the Reuse Requirements Model (RRM) and the Reuse Model (RM). The former is used to describe the framework structure, and the latter is in charge of supporting the reuse process. As soon as the application engineer has filled in the RM, the reuse code can be automatically generated. Results: We also present the results of two comparative experiments using two versions of a Persistence CF: the original one, whose reuse process is based on writing code, and the new one, which is model-based. The first experiment evaluated the productivity during the reuse process, and the second one evaluated the effort of maintaining applications developed with both CF versions. The results show an improvement of 97% in productivity; however, little difference was perceived regarding the effort to maintain the resulting applications. Conclusion: By using the approach presented herein, it was possible to conclude the following: (i) it is possible to automate the instantiation of CFs, and (ii) the productivity of developers is improved as long as they use a model-based instantiation approach.
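The "white-box" reuse discussed above can be pictured with a small, hypothetical sketch: to reuse an aspect-oriented persistence framework directly in code, the application engineer must know which framework class to extend, which internal method to call, and AspectJ pointcut syntax, exactly the knowledge the model-based approach is meant to hide. Names and structure are illustrative, not the actual Persistence CF used in the experiments.

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    // Framework side (simplified): generic persistence logic kept in a base class.
    abstract class AbstractPersistence {
        protected Object persistAround(ProceedingJoinPoint pjp) throws Throwable {
            // open connection / transaction (omitted), run the business operation, commit
            return pjp.proceed();
        }
    }

    // Application side: white-box reuse requires extending framework internals
    // and writing the concrete pointcut by hand.
    @Aspect
    class AccountPersistenceAspect extends AbstractPersistence {

        @Around("execution(* com.example.bank.AccountService.*(..))")
        public Object persist(ProceedingJoinPoint pjp) throws Throwable {
            return persistAround(pjp);
        }
    }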
Abstract:
Writing unit tests for legacy systems is a key maintenance task. When writing tests for object-oriented programs, objects need to be set up and the expected effects of executing the unit under test need to be verified. If developers lack internal knowledge of a system, the task of writing tests is non-trivial. To address this problem, we propose an approach that exposes side effects detected in example runs of the system and uses these side effects to guide the developer when writing tests. We introduce a visualization called Test Blueprint, through which we identify what the required fixture is and what assertions are needed to verify the correct behavior of a unit under test. The dynamic analysis technique that underlies our approach is based on both tracing method executions and on tracking the flow of objects at runtime. To demonstrate the usefulness of our approach we present results from two case studies.
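To make concrete what setting up a fixture and asserting on side effects looks like in such a test, here is a small, hypothetical JUnit 5 example; the Cart, Order and CheckoutService classes and the side effect being verified are illustrative assumptions, not taken from the paper's case studies.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    // Minimal supporting classes so the example is self-contained.
    class Item {
        final String name;
        final int priceInCents;
        Item(String name, int priceInCents) { this.name = name; this.priceInCents = priceInCents; }
    }

    class Cart {
        private final java.util.List<Item> items = new java.util.ArrayList<>();
        private boolean paid;
        void add(Item item) { items.add(item); }
        void markPaid() { paid = true; }
        boolean isPaid() { return paid; }
        int total() { return items.stream().mapToInt(i -> i.priceInCents).sum(); }
    }

    class Order {
        private final int totalInCents;
        Order(int totalInCents) { this.totalInCents = totalInCents; }
        int totalInCents() { return totalInCents; }
    }

    class CheckoutService {
        Order checkout(Cart cart) {
            cart.markPaid();                // side effect on an object of the fixture
            return new Order(cart.total()); // returned value derived from the fixture
        }
    }

    class CheckoutTest {

        @Test
        void checkoutMarksCartAsPaidAndCreatesOrder() {
            // Fixture: the object graph the unit under test needs before execution.
            Cart cart = new Cart();
            cart.add(new Item("book", 2500));

            // Execute the unit under test.
            Order order = new CheckoutService().checkout(cart);

            // Assertions: verify the side effects the unit is expected to produce,
            // i.e. exactly the information a Test Blueprint is meant to reveal.
            assertTrue(cart.isPaid());
            assertEquals(2500, order.totalInCents());
        }
    }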
Abstract:
Mainstream IDEs such as Eclipse support developers in managing software projects mainly by offering static views of the source code. Such a static perspective neglects any information about runtime behavior. However, object-oriented programs heavily rely on polymorphism and late binding, which makes them difficult to understand based on their static structure alone. Developers thus resort to debuggers or profilers to study the system's dynamics. However, the information provided by these tools is volatile and hence cannot be exploited to ease the navigation of the source space. In this paper we present an approach to augment the static source perspective with dynamic metrics such as precise runtime type information, or memory and object allocation statistics. Dynamic metrics can improve the understanding of the behavior and structure of a system. We rely on dynamic data gathering based on aspects to analyze running Java systems. By solving concrete use cases we illustrate how dynamic metrics directly available in the IDE are useful. We also comprehensively report on the efficiency of our approach to gathering dynamic metrics.
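The paper's own instrumentation is not shown in the abstract; the following hypothetical annotation-style AspectJ sketch illustrates the general idea of gathering dynamic data (here, per-class instantiation counts and the runtime receiver types observed at call sites) from a running Java system. The package com.example.app and all names are assumptions.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.AfterReturning;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;

    @Aspect
    public class DynamicMetricsAspect {

        // Object allocation statistics: how many instances of each class were created.
        private static final Map<String, Integer> allocations = new ConcurrentHashMap<>();

        // Precise runtime receiver types observed at each call signature (late binding in action).
        private static final Map<String, String> observedReceiverTypes = new ConcurrentHashMap<>();

        @AfterReturning("call(com.example.app..*.new(..)) && !within(DynamicMetricsAspect)")
        public void countAllocation(JoinPoint jp) {
            allocations.merge(jp.getSignature().getDeclaringTypeName(), 1, Integer::sum);
        }

        @Before("call(* com.example.app..*.*(..)) && !within(DynamicMetricsAspect)")
        public void recordReceiverType(JoinPoint jp) {
            Object target = jp.getTarget();
            if (target != null) {
                observedReceiverTypes.put(jp.getSignature().toShortString(),
                                          target.getClass().getName()); // actual runtime type
            }
        }

        // Accessors an IDE front end could query to enrich its static views.
        public static Map<String, Integer> allocationStatistics() { return allocations; }
        public static Map<String, String> receiverTypes() { return observedReceiverTypes; }
    }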
Abstract:
This paper introduces a novel technique for identifying logically related sections of the heap such as recursive data structures, objects that are part of the same multi-component structure, and related groups of objects stored in the same collection/array. When combined with the lifetime properties of these structures, this information can be used to drive a range of program optimizations including pool allocation, object co-location, static deallocation, and region-based garbage collection. The technique outlined in this paper also improves the efficiency of the static analysis by providing a normal form for the abstract models (speeding the convergence of the static analysis). We focus on two techniques for grouping parts of the heap. The first is a technique for precisely identifying recursive data structures in object-oriented programs based on the types declared in the program. The second technique is a novel method for grouping objects that make up the same composite structure and that allows us to partition the objects stored in a collection/array into groups based on a similarity relation. We provide a parametric component in the similarity relation in order to support specific analysis applications (such as a numeric analysis which would need to partition the objects based on numeric properties of the fields). Using the Barnes-Hut benchmark from the JOlden suite we show how these grouping methods can be used to identify various types of logical structures allowing the application of many region-based program optimizations.
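As a rough analogy for the first grouping technique (identifying recursive data structures from the types declared in the program), the hypothetical sketch below uses reflection to check whether a class's declared field types can reach the class itself. The real analysis in the paper is static and works on abstract heap models, so this is only an illustration of the underlying idea.

    import java.lang.reflect.Field;
    import java.util.HashSet;
    import java.util.Set;

    public class RecursiveTypeDetector {

        // A type is considered recursive if, following declared field types,
        // we can reach the starting type again (e.g. a linked list or tree node).
        public static boolean isRecursive(Class<?> type) {
            return reaches(type, type, new HashSet<>());
        }

        private static boolean reaches(Class<?> current, Class<?> target, Set<Class<?>> visited) {
            if (!visited.add(current)) {
                return false;
            }
            for (Field f : current.getDeclaredFields()) {
                Class<?> fieldType = f.getType();
                if (fieldType.isArray()) {
                    fieldType = fieldType.getComponentType();
                }
                if (fieldType.isPrimitive()) {
                    continue;
                }
                if (fieldType == target || reaches(fieldType, target, visited)) {
                    return true;
                }
            }
            return false;
        }

        // Example: a binary-tree node whose children are of its own type is recursive,
        // while a flat record of primitives is not.
        static class TreeNode { int value; TreeNode left, right; }
        static class Point    { int x, y; }

        public static void main(String[] args) {
            System.out.println(isRecursive(TreeNode.class)); // true
            System.out.println(isRecursive(Point.class));    // false
        }
    }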
Abstract:
Finding useful sharing information between instances in object-oriented programs has recently been the focus of much research. The applications of such static analysis are multiple: by knowing which variables share in memory we can apply conventional compiler optimizations, find coarse-grained parallelism opportunities, or, more importantly, verify certain correctness aspects of programs even in the absence of annotations. In this paper we introduce a framework for deriving precise sharing information based on abstract interpretation for a Java-like language. Our analysis achieves precision in various ways. The analysis is multivariant, which allows separating different contexts. We propose a combined Set Sharing + Nullity + Classes domain which captures which instances share and which ones do not or are definitely null, and which uses the classes to refine the static information when inheritance is present. Carrying the domains in a combined way facilitates the interaction among the domains in the presence of multivariance in the analysis. We show that both the set-sharing part of the domain and the combined domain provide more accurate information than previous work based on pair-sharing domains, at a reasonable cost.
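As a rough picture of what a set-sharing abstraction records, the hypothetical sketch below represents an abstract state as a set of sharing groups (sets of variables that may reach overlapping heap structures) plus a nullity set, and answers a simple may-share query. It only illustrates the data representation, not the transfer functions or the class component of the analysis described in the paper.

    import java.util.Set;

    public class SetSharingState {

        // Each inner set is a sharing group: variables that may reach a common object.
        private final Set<Set<String>> sharingGroups;
        // Variables known to be definitely null in this abstract state.
        private final Set<String> definitelyNull;

        public SetSharingState(Set<Set<String>> sharingGroups, Set<String> definitelyNull) {
            this.sharingGroups = sharingGroups;
            this.definitelyNull = definitelyNull;
        }

        // Two variables may share only if some sharing group contains both of them.
        public boolean mayShare(String x, String y) {
            return sharingGroups.stream().anyMatch(g -> g.contains(x) && g.contains(y));
        }

        public boolean isDefinitelyNull(String x) {
            return definitelyNull.contains(x);
        }

        public static void main(String[] args) {
            SetSharingState s = new SetSharingState(
                    Set.of(Set.of("a", "b"), Set.of("c")),
                    Set.of("d"));
            System.out.println(s.mayShare("a", "b"));    // true: a and b may share
            System.out.println(s.mayShare("a", "c"));    // false: no group contains both
            System.out.println(s.isDefinitelyNull("d")); // true
        }
    }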
Abstract:
Resource Analysis (a.k.a. Cost Analysis) tries to approximate the cost of executing programs as functions on their input data sizes and without actually having to execute the programs. While a powerful resource analysis framework on object-oriented programs existed before this thesis, advanced aspects to improve the efficiency, the accuracy and the reliability of the results of the analysis still need to be further investigated. This thesis tackles this need from the following four different perspectives. (1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses which keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost non-heap allocated variables. In this thesis we present two extensions to this approach: the first extension is to consider array accesses (in addition to object fields), while the second extension focuses on handling cases for which the locality conditions cannot be proven unconditionally, by finding aliasing preconditions under which tracking such heap locations is feasible. (2) The aim of incremental analysis is, given a program, its analysis results and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without having to (re-)analyze fragments of code that are not affected by the changes. During software development, programs are permanently modified, but most analyzers still read and analyze the entire program at once in a non-incremental way. This thesis presents an incremental resource usage analysis which, after a change in the program is made, is able to reconstruct the upper bounds of all affected methods in an incremental way. To this purpose, we propose (i) a multi-domain incremental fixed-point algorithm which can be used by all global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of cost functions affected by the change. (3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy, unless the tool implementation or the results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. In this thesis we focus on the development of a formal framework for the verification of the resource guarantees obtained by the analyzers, instead of verifying the tools. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code. COSTA is able to derive upper bounds of Java programs while KeY proves the validity of these bounds and provides a certificate. The main contribution of our work is to show that the proposed tool cooperation can be used for automatically producing verified resource guarantees.
(4) Distribution and concurrency are today mainstream. Concurrent objects form a well established model for distributed concurrent systems. In this model, objects are the concurrency units that communicate via asynchronous method calls. Distribution suggests that analysis must infer the cost of the diverse distributed components separately. In this thesis we propose a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, can keep the cost of the diverse distributed components separate.
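As a generic illustration of what such a cost analysis produces (not an example taken from COSTA), consider a method that traverses a list of length n; the analysis first sets up cost relations for the method and then solves them into a closed-form upper bound:

    \[ C_{\mathrm{traverse}}(n) = \begin{cases} c_0 & \text{if } n = 0, \\ c_1 + C_{\mathrm{traverse}}(n-1) & \text{if } n > 0, \end{cases} \]
    \[ \text{which is solved into the upper bound } C_{\mathrm{traverse}}(n) \le c_0 + c_1\, n . \]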
Abstract:
Most comparative studies of public policies for competitiveness focus on the links among public agencies and industrial sectors. This paper argues that the professions, or knowledge-bearing elites, that animate these organizational links are equally significant. For public policies to promote technological advance, the visions and self-images of knowledge-bearing elites are particularly important. By examining administrative and technical elites in France and Germany in the 1980s, the paper identifies characteristics that enable these elites to implement policy in some cases, but not in others. France's "state-created" elites were well positioned to initiate and implement large technology projects, such as digitizing the telecommunications network. Germany's state-recognized elites were, by contrast, better positioned to facilitate framework-oriented programs that aimed at the diffusion of new technologies throughout industry. The linkages among administrative and technical elites also explain why French policymakers had difficulty adapting policy to changing circumstances over time, while German policymakers managed in many cases to learn more from previous policy experiences and to adapt subsequent initiatives accordingly.