930 results for object-oriented software framework
Abstract:
Object-oriented programming is a widely adopted paradigm for desktop software development. This paradigm partitions software into separate entities, objects, which consist of data and the related procedures used to modify and inspect it. The paradigm has evolved during the last few decades to emphasize decoupling between object implementations, via means such as explicit interface inheritance and event-based implicit invocation. Inter-process communication (IPC) technologies allow applications to interact with each other. This makes it possible to distribute software across multiple processes, resulting in a modular architecture with benefits in resource sharing, robustness, code reuse and security. The support for object-oriented programming concepts varies between IPC systems. This thesis focuses on the D-Bus system, which has recently gained a lot of users but is still only sparsely researched. D-Bus supports asynchronous remote procedure calls with return values and a content-based publish/subscribe event delivery mechanism. In this thesis, several patterns for method invocation in D-Bus and similar systems are compared. The patterns that simulate synchronous local calls are shown to be dangerous. We then present a state-caching proxy construct, which avoids the complexity of properly asynchronous calls for object inspection. The proxy and certain supplementary constructs are presented conceptually as generic object-oriented design patterns. The effect of these patterns on non-functional qualities of software, such as complexity, performance and power consumption, is reasoned about based on the properties of the D-Bus system. The use of the patterns reduces complexity while maintaining the other qualities at a good level. Finally, we present the currently existing means of specifying D-Bus object interfaces for the purposes of code and documentation generation. The interface description language used by the Telepathy modular IM/VoIP framework is found to be a useful extension of the basic D-Bus introspection format.
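Since the state-caching proxy is presented as a generic object-oriented design pattern, its core idea can be sketched outside D-Bus itself. Below is a minimal illustration in Java; the ChangeSource interface and its method names are hypothetical stand-ins for a D-Bus property-changed signal subscription, not the thesis's actual API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Delivers property-change events, e.g. a D-Bus signal subscription (hypothetical). */
interface ChangeSource {
    void onPropertyChanged(java.util.function.BiConsumer<String, Object> listener);
    Map<String, Object> fetchAllProperties(); // one initial bulk read of remote state
}

/**
 * State-caching proxy: answers read-only inspection calls from a local cache
 * that is kept current by change events, so callers never need a blocking
 * remote round trip per property read.
 */
final class CachingProxy {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    CachingProxy(ChangeSource source) {
        cache.putAll(source.fetchAllProperties());   // prime the cache once
        source.onPropertyChanged(cache::put);        // stay in sync via events
    }

    /** Local, non-blocking read; no IPC round trip involved. */
    Object get(String property) {
        return cache.get(property);
    }
}
```

The point of the construct is visible in get(): inspection becomes a local, non-blocking read, while correctness is preserved by updating the cache from change events rather than by synchronous remote calls.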
Abstract:
Over the last ten years, the cost of maintaining object-oriented systems has grown to account for more than 70% of the total cost of those systems. This situation is due to several factors, the most important of which are: imprecise user specifications, a rapidly changing execution environment, and the poor internal quality of the systems. Of all these factors, the only one over which we have real control is the internal quality of the systems. Many quality models have been proposed in the literature to help control quality. However, most of these models use class metrics (such as the number of methods in a class) or metrics of relationships between classes (such as the coupling between two classes) to measure the internal attributes of systems. Yet the quality of object-oriented systems does not depend only on the structure of their classes, which is what such metrics measure, but also on the way those classes are organised, that is, on their design, which generally manifests itself through design patterns and anti-patterns. In this thesis we propose the DEQUALITE method, which makes it possible to systematically build quality models that take into account not only the internal attributes of systems (through metrics) but also their design (through design patterns and anti-patterns). The method uses a learning approach based on Bayesian networks and draws on the results of a series of experiments assessing the impact of design patterns and anti-patterns on system quality. These experiments, carried out on 9 large open-source object-oriented systems, allow us to draw the following conclusions: • Contrary to intuition, design patterns do not always improve system quality; tightly coupled implementations of design patterns, for example, affect the structure of classes and have a negative impact on their change-proneness and fault-proneness. • Classes participating in anti-patterns are much more likely to change and to be involved in fault fixes than the other classes of a system. • A non-negligible percentage of classes participate simultaneously in design patterns and in anti-patterns; design patterns have a positive effect in the sense that they mitigate the anti-patterns. We apply and validate our method on three open-source object-oriented systems to demonstrate the contribution of system design to the evaluation of quality.
Abstract:
The traditional waterfall software life cycle model has several weaknesses. One problem is that a working version of a system is unavailable until a late stage in the development; any omissions and mistakes in the specification that remain undetected until that stage can be costly to rectify. The operational approach, which emphasises the construction of executable specifications, can help to remedy this problem. An operational specification may be exercised to generate the behaviours of the specified system, thereby serving as a prototype to facilitate early validation of the system's functional requirements. Recent ideas have centred on using an existing operational method, such as JSD, in the specification phase of object-oriented development. An explicit transformation phase following specification is necessary in this approach because differences in abstraction between the two domains need to be bridged. This research explores an alternative approach: developing an operational specification method specifically for object-oriented development. By incorporating object-oriented concepts in operational specifications, the specifications have the advantage of directly facilitating implementation in an object-oriented language without requiring further significant transformations. In addition, object-oriented concepts can help the developer manage the complexity of the problem domain specification, whilst providing the user with a specification that closely reflects the real world, so that the specification and its execution can be readily understood and validated. A graphical notation has been developed for the specification method which can capture the dynamic properties of an object-oriented system. A tool has also been implemented, comprising an editor to facilitate the input of specifications and an interpreter which can execute the specifications and graphically animate the behaviours of the specified systems.
Abstract:
In analysing manufacturing systems, for either design or operational reasons, failure to account for the potentially significant dynamics could produce invalid results. There are many analysis techniques that can be used; however, simulation is unique in its ability to assess detailed, dynamic behaviour. The use of simulation to analyse manufacturing systems would therefore seem appropriate if not essential. Many simulation software products are available, but their ease of use and scope of application vary greatly. This is illustrated at one extreme by simulators, which offer rapid but limited application, and at the other by simulation languages, which are extremely flexible but tedious to code. Given that a typical manufacturing engineer does not possess in-depth programming and simulation skills, the use of simulators over simulation languages would seem the more appropriate choice. Whilst simulators offer ease of use, their limited functionality may preclude their use in many applications. The construction of current simulators makes it difficult to amend or extend the functionality of the system to meet new challenges. Some simulators could even become obsolete as users demand modelling functionality that reflects the latest manufacturing system design and operation concepts. This thesis examines the deficiencies in current simulation tools and considers whether they can be overcome by the application of object-oriented principles. Object-oriented techniques have gained in popularity in recent years and are seen as having the potential to overcome many of the problems traditionally associated with software construction. There are a number of key concepts that are exploited in the work described in this thesis: the use of object-oriented techniques to act as a framework for abstracting engineering concepts into a simulation tool, and the ability to reuse and extend object-oriented software. It is argued that current object-oriented simulation tools are deficient and that, in designing such tools, object-oriented techniques should be used not just for the creation of individual simulation objects but for the creation of the complete software. This results in the ability to construct an easy-to-use simulator that is not limited by its initial functionality. The thesis presents the design of an object-oriented, data-driven simulator which can be freely extended. Discussion and work are focused on discrete parts manufacture. The system developed retains the ease of use typical of data-driven simulators whilst removing any limitation on its potential range of applications. Reference is made to additions made to the simulator by other developers not involved in the original software development. Particular emphasis is put on the requirements of the manufacturing engineer and the need for the engineer to carry out dynamic evaluations.
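To make the data-driven, freely extensible design concrete, the following is a minimal sketch of the kind of discrete-event core such a simulator might be built on; the class names and structure are illustrative assumptions, not taken from the thesis.

```java
import java.util.PriorityQueue;

/** A scheduled state change; concrete resources extend this by subclassing. */
abstract class Event implements Comparable<Event> {
    final double time;
    Event(double time) { this.time = time; }
    abstract void fire(Simulator sim);   // behaviour supplied by subclasses
    public int compareTo(Event other) { return Double.compare(time, other.time); }
}

/** Core engine: knows nothing about manufacturing; extension happens in Event subclasses. */
class Simulator {
    private final PriorityQueue<Event> agenda = new PriorityQueue<>();
    double now = 0.0;

    void schedule(Event e) { agenda.add(e); }

    void run(double endTime) {
        while (!agenda.isEmpty() && agenda.peek().time <= endTime) {
            Event e = agenda.poll();
            now = e.time;
            e.fire(this);                // may schedule further events
        }
    }
}
```

New manufacturing concepts are then added by subclassing Event (or richer resource classes layered on top of it) rather than by modifying the engine, which is the reuse-and-extend property argued for above.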
Abstract:
Many industrial machine vision and pattern recognition problems are very similar, so largely the same components could be reused when designing prototype applications. Object-oriented application frameworks offer an excellent way to speed up software development by improving reusability. This both enables wider use of machine vision applications and saves costs. This thesis presents a machine vision application framework whose basic architecture is pipeline-like. The top-level structure consists of a sensor, data processing operations, a feature extractor and a classifier. In addition to the framework itself, a set of image processing and pattern recognition operations has been implemented. The framework clearly speeds up programming work and makes it easier to add new image processing operations.
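The pipeline architecture described above can be pictured as a chain of small interfaces. Here is a minimal sketch in Java; the interface names are illustrative assumptions, and Java is used for uniformity across the examples in this listing.

```java
import java.util.List;

/** Pipeline stages of the top-level architecture (illustrative names). */
interface Sensor           { double[][] acquire(); }             // e.g. a camera frame
interface ImageOperation   { double[][] apply(double[][] img); } // filtering, thresholding, ...
interface FeatureExtractor { double[] extract(double[][] img); }
interface Classifier       { String classify(double[] features); }

/** Wires the stages together; new operations plug in without touching this class. */
class VisionPipeline {
    private final Sensor sensor;
    private final List<ImageOperation> operations;
    private final FeatureExtractor extractor;
    private final Classifier classifier;

    VisionPipeline(Sensor s, List<ImageOperation> ops, FeatureExtractor f, Classifier c) {
        sensor = s; operations = ops; extractor = f; classifier = c;
    }

    String runOnce() {
        double[][] img = sensor.acquire();
        for (ImageOperation op : operations) img = op.apply(img);
        return classifier.classify(extractor.extract(img));
    }
}
```

Adding a new image processing operation then means implementing ImageOperation and inserting it into the list, which is the reusability gain the abstract claims.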
Abstract:
In this paper we describe a novel, extensible visualization system currently under development at Aston University. We bring modern programming methods to 3D landscape visualization software, such as data-driven programming, design patterns, and the careful definition of interfaces to allow easy extension using plug-ins. We combine this with modern developments in computer graphics, such as vertex and fragment shaders, to create an extremely flexible, extensible, real-time, near-photorealistic visualization system. We show the design of the system and its main sub-components, stress the role of modern programming practices, and illustrate the benefits these bring to 3D visualization. © 2006 Springer-Verlag Berlin Heidelberg.
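As a concrete picture of interfaces defined for plug-in extension, here is a minimal Java sketch using the JDK's standard ServiceLoader mechanism; the LayerPlugin extension point is a hypothetical name, not the system's real API.

```java
import java.util.ServiceLoader;

/** Extension point: a renderable landscape layer (hypothetical name). */
interface LayerPlugin {
    String name();
    void render(double time);   // called once per frame by the host
}

/** Host side: discovers plug-ins on the classpath instead of hard-coding them. */
class PluginHost {
    public static void main(String[] args) {
        // Implementations declare themselves in META-INF/services/LayerPlugin
        for (LayerPlugin p : ServiceLoader.load(LayerPlugin.class)) {
            System.out.println("loaded layer: " + p.name());
            p.render(0.0);
        }
    }
}
```

A plug-in is then just a class implementing LayerPlugin, declared in a service file; the host picks it up at startup without recompilation, which is the kind of easy extension the paper describes.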
Abstract:
This thesis presents the development process, following a prototyping model, of a software system intended for monitoring provided services. The report covers the requirements, the object-oriented analysis and design, a description of the implementation process, and the testing of the prototype. The application domain of the software is the monitoring of provided services. The requirements for the application were analysed on the basis of the software market and in accordance with software engineering principles. The software prototype was implemented using a hybrid client/server model and a relational database. The developed software is intended for Russian computer clubs that specialise in providing game servers.
Abstract:
This paper presents the development of a two-dimensional interactive software environment for structural analysis and optimization based on object-oriented programming using the C++ language. The main feature of the software is the effective integration of several computational tools into graphical user interfaces implemented in the Windows-98 and Windows-NT operating systems. The interfaces simplify data specification in the simulation and optimization of two-dimensional linear elastic problems. NURBS have been used in the software modules to represent geometric and graphical data. Extensions to the analysis of three-dimensional problems have been implemented and are also discussed in this paper.
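Where the paper uses NURBS to represent geometric and graphical data, the underlying machinery is the Cox-de Boor recursion for B-spline basis functions, which a NURBS curve combines with weighted control points. A small, self-contained sketch of the basis evaluation (not the paper's code, and Java rather than the paper's C++, for uniformity with the other examples here):

```java
/** Cox-de Boor recursion for the B-spline basis functions underlying NURBS. */
class BSplineBasis {
    /** N_{i,p}(u) over knot vector t; 0/0 terms are treated as zero by convention. */
    static double N(int i, int p, double u, double[] t) {
        if (p == 0) return (t[i] <= u && u < t[i + 1]) ? 1.0 : 0.0;
        double left = 0, right = 0;
        if (t[i + p] != t[i])
            left = (u - t[i]) / (t[i + p] - t[i]) * N(i, p - 1, u, t);
        if (t[i + p + 1] != t[i + 1])
            right = (t[i + p + 1] - u) / (t[i + p + 1] - t[i + 1]) * N(i + 1, p - 1, u, t);
        return left + right;
    }
}
```

A NURBS curve point is then the weighted sum of control points, with each weight w_i multiplying N_{i,p}(u) and the result normalised by the sum of those products.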
Abstract:
ICT accounts for about 0.83 GtCO2 of emissions, of which 37% comes from telecoms infrastructures. At the same time, the increasing cost of energy has been hindering the industry in providing more affordable services for users. One source of these problems is said to be the rigidity of current network infrastructures, which limits innovation in the network. SDN (Software Defined Networking) has emerged as one of the prominent solutions, with its ideas of abstraction, visibility, and programmability in the network. Nevertheless, significant effort is still needed to actually utilize it to create a more energy- and environmentally friendly network. In this paper, we propose and develop a platform for developing ecology-related SDN applications. Our main approach to realizing this goal is to maximize the abstractions provided by OpenFlow and to expose RESTful interfaces to modules which enable energy saving in the network. While OpenFlow is intended to be the standard SDN protocol, some mechanisms are still not defined in its specification, such as settings related to Quality of Service (QoS). To address this, we created REST interfaces for setting QoS in the switches, which can maximize network utilization. We also created a module for minimizing the network resources required to deliver packets across the network. This is achieved by utilizing redundant links when they are needed, but disabling them when the load in the network decreases. The use of multiple paths in a network is also evaluated for its benefits in terms of transfer rate improvement and energy savings. We hope the developed framework can be beneficial to developers creating applications that support environmentally friendly network infrastructures.
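As an illustration of the RESTful-interface idea, here is a minimal controller-side endpoint using the JDK's built-in HTTP server; the URL path and payload shape are assumptions, since the paper's concrete API is not reproduced here.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

/**
 * Minimal REST-style endpoint of the kind described above: a controller-side
 * interface through which applications set QoS parameters that the OpenFlow
 * specification itself leaves undefined. Path and body format are invented.
 */
public class QosEndpoint {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/qos/queues", exchange -> {
            // A real module would parse the body and push the queue configuration
            // to the switch via its management protocol (e.g. OVSDB).
            byte[] body = exchange.getRequestBody().readAllBytes();
            System.out.println("received QoS config: " + new String(body));
            byte[] reply = "{\"status\":\"ok\"}".getBytes();
            exchange.sendResponseHeaders(200, reply.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(reply); }
        });
        server.start();
    }
}
```

An application could then drive it with, say, curl -X POST localhost:8080/qos/queues -d '{"port":1,"max-rate":500000000}' (an invented payload), keeping energy-saving logic behind an ordinary web API.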
Abstract:
Abstract interpreters rely on the existence of a fixpoint algorithm that calculates a least-upper-bound approximation of the semantics of the program. Usually, that algorithm is described in terms of the particular language under study and is therefore not directly applicable to programs written in a different source language. In this paper we introduce a generic, block-based, and uniform representation of the program control flow graph and a language-independent fixpoint algorithm that can be applied to a variety of languages and, in particular, Java. Two major characteristics of our approach are accuracy (obtained through a top-down, context-sensitive approach) and reasonable efficiency (achieved by means of memoization and dependency-tracking techniques). We have also implemented the proposed framework and show some initial experimental results for standard benchmarks, which further support the feasibility of the adopted solution.
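The language-independent fixpoint computation can be sketched as a worklist iteration over the block-based control flow graph. The simplified Java sketch below is forward and context-insensitive, so it omits the paper's top-down, context-sensitive machinery; it assumes monotone transfer functions over a finite powerset domain, under which the iteration terminates at a fixpoint.

```java
import java.util.*;

/** A language-independent, block-based node of the control flow graph. */
interface Block {
    List<Block> successors();
    Set<String> transfer(Set<String> in);   // the block's abstract transfer function
}

/**
 * Generic forward fixpoint via a worklist: per-block results are memoized, and
 * a block is re-analysed only when new facts flow into it (dependency tracking).
 */
class Fixpoint {
    static Map<Block, Set<String>> analyse(Block entry, Set<String> initial) {
        Map<Block, Set<String>> in = new HashMap<>();   // memoized entry facts
        Map<Block, Set<String>> out = new HashMap<>();  // memoized exit facts
        in.put(entry, new HashSet<>(initial));
        Deque<Block> worklist = new ArrayDeque<>(List.of(entry));
        while (!worklist.isEmpty()) {
            Block b = worklist.poll();
            Set<String> o = b.transfer(in.get(b));
            out.put(b, o);
            for (Block s : b.successors()) {
                Set<String> sIn = in.computeIfAbsent(s, k -> new HashSet<>());
                if (sIn.addAll(o)) worklist.add(s);     // join grew: re-analyse successor
            }
        }
        return out;   // a least-upper-bound approximation over the powerset lattice
    }
}
```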
Abstract:
Testing is nowadays the most widely used technique to validate software and assess its quality. It is integrated into all practical software development methodologies and plays a crucial role in the success of any software project. From the smallest units of code to the most complex components, their integration into a software system and later deployment, all pieces of a software product must be tested thoroughly before the product can be released. The main limitation of software testing is that it remains a mostly manual task, representing a large fraction of the total development cost; in this scenario, test automation is paramount to alleviating such high costs. Test case generation (TCG) is the process of automatically generating test inputs that achieve high coverage of the system under test. Among the wide variety of approaches to TCG, this thesis focuses on structural (white-box) TCG, where one of the most successful enabling techniques is symbolic execution. In symbolic execution, the program under test is executed with symbolic expressions as its input arguments rather than concrete values. This thesis builds on a general constraint-based TCG framework for imperative object-oriented programs (e.g., Java), in which the imperative program under test is first translated into an equivalent constraint logic program, and the translated program is then symbolically executed using the standard evaluation mechanisms of Constraint Logic Programming (CLP), extended with special treatment for dynamically allocated data structures. Improving the scalability and efficiency of symbolic execution constitutes a major challenge. It is well known that symbolic execution quickly becomes impractical due to the large number of paths that must be explored and the size of the constraints that must be handled; moreover, symbolic-execution-based TCG tends to produce an unnecessarily large number of test cases when applied to medium-sized or large programs. The contributions of this thesis can be summarised as follows. (1) A compositional approach to CLP-based TCG is developed which alleviates inter-procedural path explosion by analysing each component (method) of the program under test separately, storing the results as method summaries, and incrementally reusing them to obtain whole-program results; a similar compositional strategy, based on program specialisation (partial evaluation), is also developed for the state-of-the-art symbolic execution tool Symbolic PathFinder (SPF). (2) Resource-driven TCG is proposed as a methodology that uses resource-consumption information to drive symbolic execution towards those parts of the program under test that comply with a user-provided resource policy, avoiding the exploration of parts that violate it. (3) A generic methodology is proposed to guide symbolic execution towards the most interesting parts of a program, using abstractions as oracles to steer execution, according to structural selection criteria, through the parts that interest the programmer or tester most. (4) A new heap-constraint solver is proposed, which efficiently handles heap-related constraints and aliasing of references during symbolic execution and greatly outperforms the standard technique known as lazy initialization. (5) All of the above techniques have been implemented in the PET system (the compositional approach also in the SPF tool). Experimental evaluation has confirmed that they considerably improve the scalability and efficiency of symbolic execution and TCG.
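To fix intuitions about path-wise symbolic execution and why paths explode, here is a toy Java sketch that enumerates the path conditions of a small two-branch program; a real engine such as PET or SPF would hand each condition to a constraint solver to obtain concrete test inputs. Everything here is illustrative, not the thesis's implementation.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

/**
 * Toy path enumeration for the program:
 *   if (x > 0) { if (y > x) A else B } else C
 * Each explored path yields the conjunction of branch conditions a solver
 * would turn into a concrete test case.
 */
public class SymbolicPaths {
    record State(int pc, List<String> pathCondition) {}

    public static void main(String[] args) {
        Deque<State> frontier = new ArrayDeque<>();
        frontier.push(new State(0, List.of()));
        while (!frontier.isEmpty()) {
            State s = frontier.pop();
            switch (s.pc()) {
                case 0 -> {                          // branch on x > 0
                    frontier.push(fork(s, 1, "x > 0"));
                    frontier.push(fork(s, 9, "x <= 0"));
                }
                case 1 -> {                          // branch on y > x
                    frontier.push(fork(s, 9, "y > x"));
                    frontier.push(fork(s, 9, "y <= x"));
                }
                default -> System.out.println("path constraint: " + s.pathCondition());
            }
        }
    }

    static State fork(State s, int pc, String cond) {
        List<String> pcs = new ArrayList<>(s.pathCondition());
        pcs.add(cond);
        return new State(pc, pcs);
    }
}
```

Even this toy yields three path conditions, and each additional independent branch doubles the count; that growth is exactly the inter-procedural path explosion the method summaries of contribution (1) are designed to contain.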
Abstract:
Jackson System Development (JSD) is an operational software development method which addresses most of the software lifecycle either directly or by providing a framework into which more specialised techniques can fit. The method has two major phases: first, an abstract specification is derived that is in principle executable; second, the specification is implemented using a variety of transformations. The object-oriented paradigm is based on data abstraction and encapsulation, coupled to an inheritance architecture that is able to support software reuse. Its claims of improved programmer productivity and easier program maintenance make it an important technology to be considered for building complex software systems. The mapping of JSD specifications into procedural languages typified by Cobol, Ada, etc., involves techniques such as inversion and state vector separation to produce executable systems of acceptable performance. However, at present, no strategy exists to map JSD specifications into object-oriented languages. The aim of this research is to investigate the relationship between JSD and the object-oriented paradigm, and to identify and implement transformations capable of mapping JSD specifications into an object-oriented language typified by Smalltalk-80. The transformational strategy follows a direction whereby the concurrency of a specification is removed. Two approaches to implementing inversion, an architectural transformation that generates a simulated coroutine mechanism, are described in detail. The first approach directly realises inversions by manipulating Smalltalk-80 system contexts. This is possible in Smalltalk-80 because contexts are first class objects and are accessible to the user like any other system object. However, problems associated with this approach are expounded. The second approach realises coroutine-like behaviour in a structure called a `followmap'. A followmap is the result of a transformation on a JSD process in which a collection of followsets is generated. Each followset represents all possible state transitions a process can undergo from the current state of the process. Followsets, together with exploitation of the class/instance mechanism for implementing state vector separation, form the basis for mapping JSD specifications into Smalltalk-80. A tool, which is also built in Smalltalk-80, supports these derived transformations and enables a user to generate Smalltalk-80 prototypes of JSD specifications.
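A followmap can be pictured as a table from process states to followsets, the actions the process may accept next, with one object instance per JSD process realising state vector separation through the ordinary class/instance mechanism. A minimal sketch follows, in Java rather than Smalltalk-80 for uniformity with the other examples here; the states and actions are invented.

```java
import java.util.Map;
import java.util.Set;

/**
 * Followmap sketch: each state maps to its followset, i.e. the actions the
 * process may accept next. Each instance carries its own state, realising
 * state-vector separation via the class/instance mechanism.
 */
class FollowmapProcess {
    // Tiny lifecycle: created -> active -> closed (illustrative, not from the thesis)
    private static final Map<String, Map<String, String>> FOLLOWMAP = Map.of(
        "created", Map.of("open",  "active"),
        "active",  Map.of("use",   "active",    // may repeat
                          "close", "closed"),
        "closed",  Map.of());

    private String state = "created";

    /** Accept an action if it is in the current followset; otherwise reject it. */
    boolean accept(String action) {
        String next = FOLLOWMAP.get(state).get(action);
        if (next == null) return false;   // action not in the followset
        state = next;
        return true;
    }

    Set<String> followset() { return FOLLOWMAP.get(state).keySet(); }
}
```

Stepping such a table-driven object on demand gives coroutine-like behaviour without manipulating execution contexts, which is the contrast with the first approach described above.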
Abstract:
A Case-Based Reasoning (CBR) tool is software that can be used to develop applications that require a case-based reasoning methodology. CBR shells are a kind of application generator with a graphical user interface. They can be used by non-programmers, but the extension or integration of new components in these tools is not possible. In this paper we analyse three CBR object-oriented framework development environments: CBR*Tools, CAT-CBR, and JColibri. These frameworks work as open software development environments and facilitate the reuse of their design as well as of their implementations.
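The core step such CBR frameworks let developers reuse is similarity-based retrieval. Here is a minimal nearest-neighbour sketch in Java with invented types; it is a generic illustration, not code from CBR*Tools, CAT-CBR, or JColibri.

```java
import java.util.Comparator;
import java.util.List;

/** A stored case: a problem-description vector plus its recorded solution. */
record Case(double[] features, String solution) {}

/** Nearest-neighbour retrieval, the first step of the CBR cycle. */
class CbrRetriever {
    static Case retrieve(List<Case> caseBase, double[] query) {
        return caseBase.stream()
                .min(Comparator.comparingDouble(c -> distance(c.features(), query)))
                .orElseThrow();          // reuse/revise/retain steps would follow
    }

    private static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(sum);
    }
}
```

A framework's value is that the distance measure, case representation, and the later reuse/revise/retain steps are all pluggable extension points rather than fixed, as in a closed CBR shell.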
Abstract:
As users continually request additional functionality, software systems will continue to grow in complexity, as well as in their susceptibility to failures. Particularly for sensitive systems requiring high levels of reliability, faulty system modules may increase development and maintenance cost. Hence, identifying them early would support the development of reliable systems through improved scheduling and quality control. Research effort to predict the software modules likely to contain faults has, as a consequence, been substantial. Although a wide range of fault prediction models have been proposed, we remain far from having reliable tools that can be widely applied to real industrial systems. For projects with known fault histories, numerous research studies show that statistical models can provide reasonable estimates when predicting faulty modules from software metrics. However, as context-specific metrics differ from project to project, prediction across projects is difficult to achieve: prediction models obtained from one project's experience are ineffective at predicting fault-prone modules when applied to other projects. Hence, taking full benefit of the existing work in the software development community has been substantially limited. As a step towards solving this problem, in this dissertation we propose a fault prediction approach that exploits existing prediction models, adapting them to improve their ability to predict faulty system modules across different software projects.
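One way to picture why cross-project prediction fails, and the flavour of adaptation involved: a model fitted to one project's metric distribution must at least be applied to rescaled metrics on the target project. The toy Java sketch below shows that idea with a logistic score; it is a generic illustration with invented coefficients, not the dissertation's actual adaptation approach.

```java
/** Toy logistic fault-proneness score with per-project metric rescaling. */
class CrossProjectScorer {
    // Coefficients learned on the source project (illustrative values only)
    private final double[] weights;
    private final double bias;

    CrossProjectScorer(double[] weights, double bias) {
        this.weights = weights;
        this.bias = bias;
    }

    /** z-score the target project's metrics so they match the model's scale. */
    static double[] rescale(double[] metrics, double[] mean, double[] stddev) {
        double[] z = new double[metrics.length];
        for (int i = 0; i < metrics.length; i++) z[i] = (metrics[i] - mean[i]) / stddev[i];
        return z;
    }

    /** Probability that a module is fault-prone, given rescaled metrics. */
    double faultProbability(double[] scaledMetrics) {
        double s = bias;
        for (int i = 0; i < weights.length; i++) s += weights[i] * scaledMetrics[i];
        return 1.0 / (1.0 + Math.exp(-s));   // logistic link
    }
}
```

Without the rescale step, a threshold tuned on the source project lands in the wrong region of the target project's metric distribution, which is one concrete reason models "obtained from one project's experience" transfer poorly.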