216 results for Concurrency


Relevance:

20.00%

Publisher:

Abstract:

Resource Analysis (a.k.a. Cost Analysis) tries to approximate the cost of executing programs as functions of their input data sizes, without actually having to execute the programs. While a powerful resource analysis framework for object-oriented programs existed before this thesis, advanced aspects that improve the efficiency, the accuracy and the reliability of the analysis results still need to be investigated further. This thesis tackles this need from four different perspectives.

(1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses that keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost, non-heap-allocated variables. In this thesis we present two extensions to this approach: the first is to consider array accesses in addition to object fields, while the second handles cases in which the locality conditions cannot be proven unconditionally, by finding aliasing preconditions under which tracking such heap locations is feasible.

(2) The aim of incremental analysis is, given a program, its analysis results and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without having to re-analyze fragments of code that are not affected by the changes. During software development, programs are constantly modified, yet most analyzers still read and analyze the entire program at once in a non-incremental way. This thesis presents an incremental resource usage analysis which, after a change is made to the program, reconstructs the upper bounds of all affected methods incrementally. To this end, we propose (i) a multi-domain incremental fixed-point algorithm that can be used by all the global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of the cost functions affected by the change.

(3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy unless the tool implementation or the results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. In this thesis we therefore focus on the development of a formal framework for the verification of the resource guarantees obtained by the analyzers, instead of verifying the tools themselves. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code: COSTA derives upper bounds for Java programs, while KeY proves the validity of these bounds and provides a certificate. The main contribution of our work is to show that this cooperation between the two tools can be used to automatically produce verified resource guarantees.

(4) Distribution and concurrency are mainstream today. Concurrent objects form a well-established model for distributed concurrent systems. In this model, objects are the units of concurrency and communicate via asynchronous method calls. Distribution suggests that the analysis must infer the cost of the different distributed components separately. In this thesis we propose a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, keeps the cost of the different distributed components separate.
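As a rough illustration of the heap-sensitivity issue tackled in (1), consider a loop whose bound depends on an object field. The Java sketch below is ours, not the thesis's (class, field and method names are hypothetical): when the locality condition holds, i.e. nothing can interfere with the field while the loop runs, the analysis may soundly mirror the field in a ghost local variable and bound the number of iterations by the field's initial value.

    // Minimal, hypothetical example of the locality idea described in (1).
    class Worker {
        int pending;                      // heap-allocated state: an object field

        void drain() {
            // Original loop: the bound depends on the field this.pending,
            // which a heap-insensitive analysis cannot track soundly.
            while (this.pending > 0) {
                process();
                this.pending = this.pending - 1;
            }
        }

        void drainAsAnalyzed() {
            // What the analysis reasons about when the locality condition holds:
            // the field is mirrored by a non-heap ("ghost") local variable, so the
            // loop is bounded by the initial value of pending.
            int p = this.pending;         // ghost copy of the field
            while (p > 0) {
                process();
                p = p - 1;
            }
            this.pending = p;             // final value written back to the heap
        }

        void process() { /* work whose resource consumption is being bounded */ }
    }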

Relevance:

20.00%

Publisher:

Abstract:

We present an undergraduate course on concurrent programming where formal models are used in different stages of the learning process. The main practical difference from other approaches lies in the fact that the ability to develop correct concurrent software relies on a systematic transformation of formal models of inter-process interaction (so-called shared resources), rather than on the specific constructs of some programming language. Using a resource-centric rather than a language-centric approach has some benefits for both teachers and students. Besides the obvious advantage of being independent of the programming language, the models help in the early validation of concurrent software design, provide students and teachers with a lingua franca that greatly simplifies communication in the classroom and during supervision, and help in the automatic generation of tests for the practical assignments. This method has been in use, with slight variations, for some 15 years, surviving changes in the programming language and course length. In this article, we describe the components and structure of the current incarnation of the course, which uses Java as the target language, and some tools used to support our method. We provide a detailed description of the different outcomes that the model-driven approach delivers (validation of the initial design, automatic generation of tests, and mechanical generation of code) from a teaching perspective. A critical discussion of the perceived advantages and risks of our approach follows, including some proposals on how these risks can be minimized. We include a statistical analysis to show that our method has a positive impact on students' ability to understand concurrency and to generate correct code.
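As a rough illustration of the resource-centric approach (the example and all names below are ours, not taken from the course material), a shared resource such as a one-slot buffer is first modelled by the conditions under which each operation may proceed, and that model is then transformed systematically into code, here a Java monitor.

    // Hypothetical one-slot buffer: the model says put may proceed only when the
    // buffer is empty and get only when it is full; the blocking conditions map
    // onto while/wait loops and every state change triggers re-evaluation.
    class OneSlotBuffer<T> {
        private T slot = null;            // resource state: empty (null) or full

        public synchronized void put(T item) throws InterruptedException {
            while (slot != null) {        // blocking condition from the model: must be empty
                wait();
            }
            slot = item;
            notifyAll();                  // state changed: let blocked operations re-check
        }

        public synchronized T get() throws InterruptedException {
            while (slot == null) {        // blocking condition from the model: must be full
                wait();
            }
            T item = slot;
            slot = null;
            notifyAll();
            return item;
        }
    }

The same model could be mapped onto a different target language without changing the specification, which is the point of keeping the course resource-centric rather than language-centric.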

Relevance:

20.00%

Publisher:

Abstract:

"UILU-ENG 83 1723"--Cover.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we present a formal model of Java concurrency using the Object-Z specification language. This model captures the Java thread synchronization concepts of locking, blocking, waiting and notification. In the model, we take a viewpoints approach, first capturing the role of the objects and threads, and then taking a system view in which we capture the way the objects and threads cooperate and communicate. As a simple illustration of how the model can, in general, be applied, we use Object-Z inheritance to integrate the model with the classical producer-consumer system to create a specification directly incorporating the Java concurrency constructs.
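For readers unfamiliar with the constructs being modelled, the fragment below (our own minimal example, not part of the paper) marks where the four concepts captured by the Object-Z model arise in plain Java:

    // Hypothetical example annotating the four modelled concepts.
    class Flag {
        private boolean raised = false;

        public synchronized void await() throws InterruptedException {
            // Entering a synchronized method acquires the object's lock (locking);
            // a thread that cannot acquire the lock is blocked.
            while (!raised) {
                wait();                   // waiting: the lock is released and the thread suspends
            }
        }

        public synchronized void raise() {
            raised = true;
            notifyAll();                  // notification: waiting threads are woken to re-check
        }
    }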

Relevance:

20.00%

Publisher:

Abstract:

The Java programming language supports concurrency. Concurrent programs are hard to test due to their inherent non-determinism. This paper presents a classification of concurrency failures that is based on a model of Java concurrency. The model and failure classification are used to justify coverage of the synchronization primitives of concurrent components. This is achieved by constructing concurrency flow graphs for each method call. A producer-consumer monitor is used to demonstrate how the approach can be used to measure coverage of concurrency primitives and thereby assist in determining test sequences for deterministic execution.
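A crude sketch of the coverage idea follows (the event set and scoring below are our simplification, not the paper's concurrency flow graphs): each monitor method is abstracted to the synchronization primitives it can execute, and a test run is scored by how many of them it actually exercised.

    import java.util.EnumSet;
    import java.util.Set;

    // Hypothetical coverage measure over synchronization primitives.
    class ConcurrencyCoverage {
        enum Event { ACQUIRE_LOCK, WAIT, NOTIFY, RELEASE_LOCK }

        // Primitives reachable in, e.g., a consumer's get() on a producer-consumer monitor.
        static final Set<Event> REACHABLE = EnumSet.allOf(Event.class);

        static double coverage(Set<Event> exercised) {
            Set<Event> hit = EnumSet.copyOf(exercised);
            hit.retainAll(REACHABLE);
            return (double) hit.size() / REACHABLE.size();
        }

        public static void main(String[] args) {
            // A run in which the consumer never had to wait misses the WAIT node,
            // so a further test sequence forcing the consumer to block is needed.
            System.out.println(coverage(EnumSet.of(Event.ACQUIRE_LOCK, Event.NOTIFY, Event.RELEASE_LOCK)));
        }
    }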

Relevance:

10.00%

Publisher:

Abstract:

With service interaction modelling, it is customary to distinguish between two types of models: choreographies and orchestrations. A choreography describes interactions within a collection of services from a global perspective, where no service plays a privileged role; instead, services interact in a peer-to-peer manner. In contrast, an orchestration describes the interactions between one particular service, the orchestrator, and a number of partner services. The main proposition of this work is an approach to bridge these two modelling viewpoints by synthesising orchestrators from choreographies. To start with, choreographies are defined using a simple behaviour description language based on communicating finite state machines. From such a model, orchestrators are initially synthesised in the form of state machines. It turns out that state machines are not suitable for orchestration modelling, because orchestrators generally need to engage in concurrent interactions. To address this issue, a technique is proposed to transform state machines into process models in the Business Process Modelling Notation (BPMN). Orchestrations represented in BPMN can then be augmented with additional business logic to achieve value-adding mediation. In addition, techniques exist for refining BPMN models into executable process definitions. The transformation from state machines to BPMN relies on Petri nets as an intermediary representation and leverages techniques from the theory of regions to identify concurrency in the initial Petri net. Once concurrency has been identified, the resulting Petri net is transformed into a BPMN model. The original contributions of this work are an algorithm to synthesise orchestrators from choreographies and a rule-based transformation from Petri nets into BPMN.
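As a rough sketch of the starting point (the encoding and the role and message names below are ours, not the behaviour description language used in the work), a choreography can be represented as a finite state machine whose transitions are message exchanges between roles; the view of a single role is the behaviour a synthesised orchestrator for that role must realise. The actual synthesis, including the detection of concurrency via Petri nets and the theory of regions, is considerably more involved.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical encoding of a choreography as a communicating finite state machine.
    class Choreography {
        record Exchange(String from, String to, String sender, String receiver, String message) {}

        final String initialState;
        final List<Exchange> exchanges = new ArrayList<>();

        Choreography(String initialState) { this.initialState = initialState; }

        Choreography exchange(String from, String to, String sender, String receiver, String msg) {
            exchanges.add(new Exchange(from, to, sender, receiver, msg));
            return this;
        }

        // The interactions a given role takes part in: the behaviour an
        // orchestrator synthesised for that role would have to exhibit.
        List<Exchange> viewOf(String role) {
            List<Exchange> view = new ArrayList<>();
            for (Exchange e : exchanges) {
                if (e.sender().equals(role) || e.receiver().equals(role)) view.add(e);
            }
            return view;
        }

        public static void main(String[] args) {
            Choreography c = new Choreography("s0")
                .exchange("s0", "s1", "Customer", "Broker", "order")
                .exchange("s1", "s2", "Broker", "Supplier", "quoteRequest")
                .exchange("s2", "s3", "Supplier", "Broker", "quote")
                .exchange("s3", "s4", "Broker", "Customer", "confirmation");
            System.out.println(c.viewOf("Supplier"));
        }
    }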

Relevance:

10.00%

Publisher:

Abstract:

Software transactional memory has the potential to greatly simplify development of concurrent software, by supporting safe composition of concurrent shared-state abstractions. However, STM semantics are defined in terms of low-level reads and writes on individual memory locations, so implementations are unable to take advantage of the properties of user-defined abstractions. Consequently, the performance of transactions over some structures can be disappointing.

We present Modular Transactional Memory, our framework which allows programmers to extend STM with concurrency control algorithms tailored to the data structures they use in concurrent programs. We describe our implementation in Concurrent Haskell, and two example structures: a finite map which allows concurrent transactions to operate on disjoint sets of keys, and a non-deterministic channel which supports concurrent sources and sinks.

Our approach is based on previous work by others on boosted and open-nested transactions, with one significant development: transactions are given types which denote the concurrency control algorithms they employ. Typed transactions offer a higher level of assurance for programmers reusing transactional code, and allow more flexible abstract concurrency control.
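To convey why abstraction-aware concurrency control matters, the sketch below is a conceptual analogue only, written in Java rather than the Concurrent Haskell of the paper and not using transactional memory at all: because the map abstraction knows which keys an operation touches, operations on disjoint keys can proceed concurrently, whereas reasoning over raw memory locations would tend to serialise them.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;
    import java.util.function.UnaryOperator;

    // Hypothetical per-key-guarded map: conflicts are detected at the level of
    // the abstraction (keys), not at the level of individual memory locations.
    class PerKeyMap<K, V> {
        private final ConcurrentHashMap<K, V> data = new ConcurrentHashMap<>();
        private final ConcurrentHashMap<K, ReentrantLock> locks = new ConcurrentHashMap<>();

        private ReentrantLock lockFor(K key) {
            return locks.computeIfAbsent(key, k -> new ReentrantLock());
        }

        // Updates one key; assumes f never returns null. Callers working on
        // disjoint keys never contend with one another.
        public V updateKey(K key, UnaryOperator<V> f) {
            ReentrantLock lock = lockFor(key);
            lock.lock();
            try {
                V updated = f.apply(data.get(key));
                data.put(key, updated);
                return updated;
            } finally {
                lock.unlock();
            }
        }
    }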

Relevance:

10.00%

Publisher:

Abstract:

Petri nets are often used to model and analyze workflows. Many workflow languages have been mapped onto Petri nets in order to provide formal semantics or to verify correctness properties. Typically, the so-called Workflow nets are used to model and analyze workflows, and variants of the classical soundness property are used as a correctness notion. Since many workflow languages have cancellation features, a mapping to plain workflow nets is not always possible; it is therefore interesting to consider workflow nets with reset arcs. Unfortunately, soundness is undecidable for workflow nets with reset arcs. In this paper, we provide a proof of this result and insights into the theoretical limits of workflow verification.
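A minimal sketch, in Java and with our own names, of the reset-arc firing rule the paper builds on: reset arcs do not influence whether a transition is enabled, but firing the transition removes every token from the places they connect to, which is what makes them suitable for modelling cancellation.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical reset-net fragment: a marking plus the firing rule with reset arcs.
    class ResetNet {
        final Map<String, Integer> marking = new HashMap<>();      // place -> number of tokens

        static class Transition {
            List<String> inputs = new ArrayList<>();                // ordinary input places
            List<String> outputs = new ArrayList<>();               // output places
            List<String> resets = new ArrayList<>();                // places connected by reset arcs
        }

        boolean enabled(Transition t) {
            // Reset arcs play no role in enabling.
            for (String p : t.inputs) {
                if (marking.getOrDefault(p, 0) < 1) return false;
            }
            return true;
        }

        void fire(Transition t) {
            if (!enabled(t)) throw new IllegalStateException("transition not enabled");
            for (String p : t.inputs)  marking.merge(p, -1, Integer::sum);   // consume input tokens
            for (String p : t.resets)  marking.put(p, 0);                    // empty reset places entirely
            for (String p : t.outputs) marking.merge(p, 1, Integer::sum);    // produce output tokens
        }
    }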

Relevance:

10.00%

Publisher:

Abstract:

Traditional workflow systems focus on providing support for the control-flow perspective of a business process, with other aspects such as data management and work distribution receiving markedly less attention. A guide to desirable workflow characteristics is provided by the well-known workflow patterns, which are derived from a comprehensive survey of contemporary tools and modelling formalisms. In this paper we describe the approach taken to designing the newYAWL workflow system, an offering that aims to provide comprehensive support for the control-flow, data and resource perspectives based on the workflow patterns. The semantics of the newYAWL workflow language are based on Coloured Petri Nets, thus facilitating the direct enactment and analysis of processes described in terms of newYAWL language constructs. As part of this discussion, we explain how the operational semantics of each of the language elements are embodied in the newYAWL system and indicate the facilities required to support them in an operational environment. We also review the experiences associated with developing a complete operational design for an offering of this scale using formal techniques.

Relevance:

10.00%

Publisher:

Abstract:

It is well accepted that different types of distributed architectures require different degrees of coupling. For example, in client-server and three-tier architectures, application components are generally tightly coupled, both with one another and with the underlying middleware. Meanwhile, in off-line transaction processing, grid computing and mobile applications, the degree of coupling between application components and with the underlying middleware needs to be minimized. Terms such as ‘synchronous’, ‘asynchronous’, ‘blocking’, ‘non-blocking’, ‘directed’, and ‘non-directed’ are often used to refer to the degree of coupling required by an architecture or provided by a middleware. However, these terms are used with various connotations. Although various informal definitions have been provided, there is a lack of an overarching formal framework to unambiguously communicate architectural requirements with respect to (de-)coupling. This article addresses this gap by: (i) formally defining three dimensions of (de-)coupling; (ii) relating these dimensions to existing middleware; and (iii) proposing notational elements to represent various coupling integration patterns. This article also discusses a prototype that demonstrates the feasibility of its implementation.
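As a small illustration of how two of these dimensions surface in code (the interface below is our own, not the article's formal framework or notation), a synchronous blocking request couples the caller to the callee in time, whereas an asynchronous non-blocking send lets the caller proceed immediately:

    import java.util.concurrent.CompletableFuture;

    // Hypothetical messaging interface contrasting two coupling styles.
    interface Channel {
        // Synchronous and blocking: the caller cannot proceed until the reply arrives,
        // so sender and receiver must both be available at the same time.
        String request(String destination, String payload) throws InterruptedException;

        // Asynchronous and non-blocking: the caller continues immediately and the
        // middleware delivers the message (and an eventual reply) on its own schedule.
        CompletableFuture<String> send(String destination, String payload);
    }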

Relevance:

10.00%

Publisher:

Abstract:

The field of workflow technology has burgeoned in recent years, providing a variety of means of automating business processes. It is a great source of opportunity for organisations seeking to streamline and optimise their operations. Despite these advantages, however, the current generation of workflow technologies is subject to a variety of criticisms, in terms of its restricted view of what comprises a business process, its imprecise definition and its general inflexibility. As a remedy to these potential difficulties, in this paper we propose a series of development goals for the next generation of workflow technology. We also present newYAWL, a formally defined, multi-perspective reference language for workflow systems.

Relevance:

10.00%

Publisher:

Abstract:

Bioinformatics is dominated by online databases and sophisticated web-accessible tools. As such, it is ideally placed to benefit from the rapid, purpose-specific combination of services achievable via web mashups. The recent introduction of a number of sophisticated frameworks has greatly simplified the mashup creation process, making mashups accessible to scientists with limited programming expertise. In this paper we investigate the feasibility of mashups as a new approach to bioinformatic experimentation, focusing on an exploratory niche between interactive web usage and robust workflows, and attempting to identify the range of computations for which mashups may be employed. While we treat each of the major frameworks, we illustrate the ideas with a series of examples developed under the Popfly framework.

Relevance:

10.00%

Publisher:

Abstract:

Increasingly, scientists are using collections of software tools in their research. These tools are typically used in concert, often necessitating laborious and error-prone manual data reformatting and transfer. We present an intuitive workflow environment to support scientists with their research. The environment, GPFlow, wraps legacy tools, presenting a high-level, interactive, web-based front end to scientists. The workflow backend is realized by a commercial-grade workflow engine (Windows Workflow Foundation). The workflow model is inspired by spreadsheets and is novel in its support for an intuitive method of interaction, enabling experimentation as required by many scientists, e.g. bioinformaticians. We apply GPFlow to two bioinformatics experiments and demonstrate its flexibility and simplicity.