975 results for Shipping process without debit


Relevance:

20.00%

Publisher:

Abstract:

Fieldbus communication networks aim to interconnect sensors, actuators and controllers within distributed computer-controlled systems, and therefore constitute the foundation upon which real-time applications are implemented. A potential leap towards the use of fieldbuses in such time-critical applications lies in the evaluation of their temporal behaviour. In the past few years, several research works have been performed on a number of fieldbuses. However, these have mostly focused on the message-passing mechanisms, without taking into account the communicating application tasks running in those distributed systems. The main contribution of this paper is an approach for engineering real-time fieldbus systems in which the schedulability analysis of the distributed system integrates both the characteristics of the application tasks and the characteristics of the message transactions performed by those tasks. In particular, we address the case of systems where the Process-Pascal multitasking language is used to develop P-NET based distributed applications.
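The kind of integrated analysis the abstract describes can be illustrated with classic fixed-priority response-time analysis, where a task's worst-case message transaction time is folded into its execution demand. This is only a minimal sketch under standard sporadic-task assumptions; the task-set representation and the `msg_delay` parameter are invented for illustration, not taken from the paper:

```python
# Hypothetical sketch: response-time analysis that adds a task's message
# transaction time to its worst-case execution time (WCET) before
# accounting for interference from higher-priority tasks.

def response_time(task, higher_prio, msg_delay):
    """Iterate R = C + msg + sum(ceil(R/T_hp) * C_hp) to a fixed point."""
    c = task["wcet"] + msg_delay   # execution plus message transaction time
    r = c
    while True:
        interference = sum(
            -(-r // hp["period"]) * hp["wcet"]   # ceiling division
            for hp in higher_prio
        )
        r_new = c + interference
        if r_new == r:
            return r                # converged: worst-case response time
        if r_new > task["deadline"]:
            return None             # unschedulable
        r = r_new
```

The fixed-point iteration converges because the interference term is monotone in `r` and bounded by the deadline check.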

Relevance:

20.00%

Publisher:

Abstract:

Master's final project submitted for the degree of Master in Chemical and Biological Engineering

Relevance:

20.00%

Publisher:

Abstract:

In the past few years, Tabling has emerged as a powerful logic programming model. The integration of concurrency features into the implementation of Tabling systems is demanded by the need to use recently developed tabling applications within distributed systems, where a process has to respond concurrently to several requests. Support for sharing tables among the concurrent threads of a Tabling process is a desirable feature, as it preserves one of Tabling's virtues, the re-use of computations by other threads, and allows efficient usage of available memory. However, the incremental completion of tables that are evaluated concurrently is not a trivial problem. In this dissertation we describe the integration of concurrency mechanisms, by way of multi-threading, in a state-of-the-art Tabling and Prolog system, XSB. We begin by reviewing the main concepts for a formal description of tabled computations, called SLG resolution, and for the implementation of Tabling under the SLG-WAM, the abstract machine supported by XSB. We describe the different scheduling strategies provided by XSB and introduce some new properties of local scheduling, a scheduling strategy for SLG resolution. We proceed by describing the process of integrating multi-threading in a Prolog system supporting Tabling, without addressing the problem of shared tables, and discuss the trade-offs and implementation decisions involved. We then describe an optimistic algorithm for the concurrent sharing of completed tables, Shared Completed Tables, which allows the sharing of tables without incurring deadlocks under local scheduling. This method relies on the execution properties of local scheduling and includes full support for negation. We provide a theoretical framework and discuss the implementation's correctness and complexity.
After that, we describe a method for the sharing of tables among threads that allows parallelism in the computation of inter-dependent subgoals, which we name Concurrent Completion, and informally argue for its correctness. We give detailed performance measurements of the multi-threaded XSB system over a variety of machines and operating systems, for both the Shared Completed Tables and the Concurrent Completion implementations, focusing on the overhead over the sequential engine and on the scalability of the system. We finish with a comparison of XSB with other multi-threaded Prolog systems, compare our approach to concurrent tabling with parallel and distributed methods for the evaluation of tabling, and identify future research directions.
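The core idea behind sharing completed tables can be sketched outside of Prolog: threads look up a subgoal's answer table and reuse it once some thread has completed it. This is an illustrative cache with invented names, not XSB's actual engine, and it sidesteps the hard part the dissertation addresses (incremental completion of inter-dependent subgoals):

```python
# Illustrative sketch in the spirit of Shared Completed Tables: a
# thread-safe store of completed answer tables, keyed by subgoal.
import threading

class TableStore:
    def __init__(self):
        self._tables = {}               # subgoal -> answer set, set on completion
        self._lock = threading.Lock()

    def lookup_or_compute(self, subgoal, evaluate):
        with self._lock:
            if subgoal in self._tables:   # completed table: reuse its answers
                return self._tables[subgoal]
        answers = evaluate(subgoal)       # evaluate outside the lock (optimistic)
        with self._lock:
            # the first thread to complete publishes; others discard duplicates
            return self._tables.setdefault(subgoal, answers)
```

Evaluating outside the lock mirrors the optimistic flavour of the algorithm: duplicated work is possible, but deadlock on table access is not.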

Relevance:

20.00%

Publisher:

Abstract:

In recent years, Software Architecture has attracted increased attention from academia and industry as the unifying concept for structuring the design of complex systems. One particular research area deals with the possibility of reconfiguring architectures to adapt the systems they describe to new requirements. Reconfiguration amounts to adding and removing components and connections, and may have to occur without stopping the execution of the system being reconfigured. This work contributes to the formal description of such a process. Taking as a premise that a single formalism hardly ever satisfies all requirements in every situation, we present three approaches, each one with its own assumptions about the systems it can be applied to and with different advantages and disadvantages. Each approach is based on the work of other researchers and has the aesthetic concern of changing the original formalism as little as possible, keeping its spirit. The first approach shows how a given reconfiguration can be specified in the same manner as the system it is applied to, and in a way that can be efficiently executed. The second approach explores the Chemical Abstract Machine, a formalism for rewriting multisets of terms, to describe architectures, computations, and reconfigurations in a uniform way. The last approach uses a UNITY-like parallel programming design language to describe computations, represents architectures by diagrams in the sense of Category Theory, and specifies reconfigurations by graph transformation rules.
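The notion of reconfiguration used above can be made concrete with a toy model: an architecture as a graph of components and connections, where a reconfiguration adds or removes nodes and edges while the rest keeps running. All names here are illustrative; the paper's three formalisms are far richer than this sketch:

```python
# Toy model of runtime architecture reconfiguration: add/remove
# components and connections on a live component graph.

class Architecture:
    def __init__(self):
        self.components = set()
        self.connections = set()   # undirected links as frozensets {a, b}

    def add_component(self, name):
        self.components.add(name)

    def connect(self, a, b):
        # only connect components that actually exist in the architecture
        if a in self.components and b in self.components:
            self.connections.add(frozenset((a, b)))

    def remove_component(self, name):
        # removing a component also drops its now-dangling connections,
        # keeping the architecture well-formed mid-reconfiguration
        self.components.discard(name)
        self.connections = {c for c in self.connections if name not in c}
```

The invariant maintained by `remove_component` (no dangling connections) is exactly the kind of well-formedness condition a formal reconfiguration description must preserve.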

Relevance:

20.00%

Publisher:

Abstract:

Master's degree in Management and Entrepreneurship

Relevance:

20.00%

Publisher:

Abstract:

We consider the problem of scheduling a multi-mode real-time system upon identical multiprocessor platforms. As a multi-mode system, it can change from one mode to another, replacing the current task set with a new task set. Ensuring that deadlines are met requires not only that a schedulability test be performed on the tasks in each mode, but also that (i) a protocol for transitioning from one mode to another is specified and (ii) a schedulability test for each transition is performed. We propose two protocols which ensure that all the expected requirements are met during every transition between every pair of operating modes of the system. Moreover, we prove the correctness of our proposed algorithms by extending the theory of the makespan determination problem.
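The makespan question underlying the transition analysis can be illustrated with the classic lower bound for finishing a set of jobs on m identical processors: no schedule can beat the largest single job or the evenly divided total work. This is only the textbook bound, not the extended theory the paper develops:

```python
# Classic makespan lower bound on m identical processors:
# max(largest remaining job, total remaining work / m).
# During a mode change, this bounds from below the time needed to drain
# the old mode's remaining workload before the new mode takes over.

def makespan_lower_bound(remaining_work, m):
    """remaining_work: list of per-task remaining execution times."""
    if not remaining_work:
        return 0.0
    return max(max(remaining_work), sum(remaining_work) / m)
```

A transition protocol that delays the new mode by at least this bound cannot admit the new task set too early, which is the safety direction of the analysis.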

Relevance:

20.00%

Publisher:

Abstract:

Foresight and scenario-building methods can be an interesting reference for the social sciences, especially as innovative methods for labour process analysis. A scenario, the central concept of prospective analysis, can be considered a rich and detailed portrait of a plausible future world. It can be a useful tool for policy-makers to grasp problems clearly and comprehensively, and to better pinpoint challenges as well as opportunities in an overall framework. Features of the foresight methods are being used in some labour policy-making experiences. Case studies developed in Portugal will be presented, and some conclusions will be drawn in order to organise a set of principles for foresight analysis applied to the European project WORKS, on work organisation re-structuring in the knowledge society and on work design methods for new management structures of virtual organisations.

Relevance:

20.00%

Publisher:

Abstract:

Jean Cocteau, French cinema auteur avant la lettre, devoted his uniqueness to the defense of the "poet" and the promotion of his artistic ideals, before the French Nouvelle Vague inspired the break away from filmic tradition and ahead of the eulogistic tendency to consider the director the undisputed creative entity of the filmmaking process. The Orphic trilogy expresses Cocteau's cinematic philosophy in action. In other words, it reveals the way in which the creative entity affirms itself as the major filmic enunciator, through an allegorical relationship with vision. Therefore, Cocteau's self-reflexive metacinema conjoins, in a fertile attunement, the starting point and the ultimate goal, the creation and the reception. Without being exactly a cinema about the cinema, this artistic practice is, nonetheless, very much with the cinema, feeding as it does on its essence. The films Le Sang d'un poète ("The Blood of a Poet", 1932), Orphée ("Orpheus", 1950) and Le Testament d'Orphée, ou ne me demandez pas pourquoi! ("Testament of Orpheus", 1960) recreate, in allegorical form, the double creative function: the look of the directing entity reflects the gaze of the observer, just as the latter always restores the presence of the creator. In short: Cocteau's films, more than anyone else's, deliberately reflect their auteur as enunciator.

Relevance:

20.00%

Publisher:

Abstract:

The discovery of X-rays was undoubtedly one of the greatest stimuli for improving efficiency in the provision of healthcare services. The ability to view, non-invasively, the inside of the human body has greatly facilitated the work of professionals in diagnosing diseases. An exclusive focus on image quality (IQ), without understanding how images are obtained, negatively affects efficiency in diagnostic radiology. The equilibrium between benefits and risks is often forgotten. It is necessary to adopt optimization strategies that maximize the benefit (image quality) and minimize the risk (dose to the patient) in radiological facilities. In radiology, the implementation of optimization strategies involves an understanding of the image acquisition process. When a radiographer adopts a certain value of a parameter (tube potential [kVp], tube current-exposure time product [mAs] or additional filtration), it is essential to know its meaning and the impact of its variation on dose and image quality. Without this, any optimization strategy will be a failure. Worldwide, data show that the use of x-rays has become increasingly frequent. In Cabo Verde, we note an effort by healthcare institutions (e.g. the Ministry of Health) to equip radiological facilities, and the recent installation of a telemedicine system requires the purchase of new radiological equipment. In addition, the transition from screen-film to digital systems is characterized by a rise in patient exposure. Given that this transition is slower in less developed countries, as is the case of Cabo Verde, the need to adopt optimization strategies becomes increasingly pressing. This study was conducted as an attempt to answer that need.
Although this work is about objective evaluation of image quality, and in medical practice the evaluation is usually subjective (visual evaluation of images by the radiographer/radiologist), studies have reported a correlation between these two types of evaluation (objective and subjective) [5-7], which supports conducting such studies. The purpose of this study is to evaluate the effect of the exposure parameters (kVp and mAs), when using additional copper (Cu) filtration, on dose and image quality in a Computed Radiography system.
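Why both kVp and mAs matter during optimization can be shown with a common rule of thumb: X-ray tube output, and hence entrance dose, scales roughly linearly with mAs and roughly with the square of kVp. This is only the generic approximation, not the study's measured model, and the reference technique values below are arbitrary:

```python
# Rule-of-thumb only: relative tube output ~ (kVp ratio)^2 * (mAs ratio).
# Illustrates why small kVp changes move dose much more than mAs changes.

def relative_dose(kvp, mas, ref_kvp=70.0, ref_mas=10.0):
    """Dose of a (kvp, mas) technique relative to a reference technique."""
    return (kvp / ref_kvp) ** 2 * (mas / ref_mas)
```

Under this approximation, halving the mAs halves the dose, while doubling the kVp quadruples it, which is the trade-off that added Cu filtration lets the operator exploit.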

Relevance:

20.00%

Publisher:

Abstract:

The aim of this paper is to evaluate the influence of the crushing process used to obtain recycled concrete aggregates on the performance of concrete made with those aggregates. Two crushing methods were considered: primary crushing, using a jaw crusher, and primary plus secondary crushing (PSC), using a jaw crusher followed by a hammer mill. Besides natural aggregates (NA), these two processes were also used to crush three types of concrete made in the laboratory (L20, L45 and L65) and three others from the precast industry (P20, P45 and P65). The coarse natural aggregates were totally replaced by coarse recycled concrete aggregates. The recycled aggregate concrete mixes were compared with reference concrete mixes made using only NA, and the following properties related to mechanical and durability performance were tested: compressive strength; splitting tensile strength; modulus of elasticity; carbonation resistance; chloride penetration resistance; water absorption by capillarity; water absorption by immersion; and shrinkage. The results show that the PSC process leads to better performance, especially in the durability properties. © 2014 RILEM

Relevance:

20.00%

Publisher:

Abstract:

Workflows have been successfully applied to express the decomposition of complex scientific applications. This has motivated many initiatives that have been developing scientific workflow tools. However, the existing tools still lack adequate support for important aspects, namely: decoupling the enactment engine from the workflow task specification, decentralizing the control of workflow activities, and allowing tasks to run autonomously on distributed infrastructures, for instance on Clouds. Furthermore, many workflow tools only support the execution of Directed Acyclic Graphs (DAGs), without the concept of iteration, whereas some activities are executed for millions of iterations over long periods of time and require dynamic workflow reconfiguration after a certain iteration. We present the AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic) model of computation, based on the Process Networks model, where the workflow activities (AWA) are autonomic processes with independent control that can run in parallel on distributed infrastructures, e.g. on Clouds. Each AWA executes a Task developed as a Java class that implements a generic interface, allowing end-users to code their applications without concern for low-level details. The data-driven coordination of AWA interactions is based on a shared tuple space that also enables support for dynamic workflow reconfiguration and monitoring of the execution of workflows. We describe how AWARD supports dynamic reconfiguration and discuss typical workflow reconfiguration scenarios. For evaluation, we describe experimental results of AWARD workflow executions in several application scenarios, mapped to a small dedicated cluster and the Amazon Elastic Compute Cloud (EC2).
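The data-driven coordination idea can be sketched with a minimal tuple space: autonomous activities exchange tagged tokens through a shared store, so no central engine drives them. The names below (`TupleSpace`, `double_activity`) are invented for illustration and are not AWARD's real API, which is Java-based:

```python
# Minimal tuple-space sketch: producers put tagged tuples, consumers
# block on take(), so activities coordinate purely through data.
import queue
from collections import defaultdict

class TupleSpace:
    def __init__(self):
        self._queues = defaultdict(queue.Queue)   # one queue per tag

    def put(self, tag, value):
        self._queues[tag].put(value)              # producer activity writes

    def take(self, tag, timeout=None):
        # consumer blocks until a matching tuple is available
        return self._queues[tag].get(timeout=timeout)

def double_activity(space):
    # an "activity": consume its input tuple, emit a result tuple
    x = space.take("in")
    space.put("out", 2 * x)
```

Because activities only ever touch the tuple space, swapping one activity for another (the reconfiguration case) needs no change to the rest of the workflow.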

Relevance:

20.00%

Publisher:

Abstract:

With the current complexity of communication protocols, implementing their layers entirely in the kernel of the operating system is too cumbersome, and it prevents the use of capabilities only available to user-space processes. However, building protocols as user-space processes must not impair the responsiveness of the communication. Therefore, in this paper we present a layer of a communication protocol which, due to its complexity, was implemented in a user-space process. The lower layers of the protocol are, for responsiveness reasons, implemented in the kernel. This protocol was developed to support large-scale power-line communication (PLC) with timing requirements.

Relevance:

20.00%

Publisher:

Abstract:

Consider the problem of sharing a wireless channel between a set of computer nodes. Hidden nodes exist and there is no base station. Each computer node hosts a set of sporadic message streams, where a message stream releases messages with real-time deadlines. We propose a collision-free wireless medium access control (MAC) protocol which implements static-priority scheduling. The MAC protocol allows multiple masters and is fully distributed. It relies neither on synchronized clocks nor on out-of-band signaling; it is an adaptation to a wireless channel of the dominance protocol used in the CAN bus. Unlike that protocol, however, ours does not require that a node be able to receive an incoming bit from the channel while transmitting to the channel. Our protocol has the key feature of being not only prioritized and collision-free but also able to deal successfully with hidden nodes. This key feature enables schedulability analysis of sporadic message streams in multihop networks.
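The dominance idea borrowed from the CAN bus can be sketched as bitwise arbitration: contending nodes transmit their priority most-significant bit first, a dominant bit (0) on the shared medium overrides a recessive bit (1), and nodes that see dominance while sending recessive back off. The sketch below models only this CAN-style mechanism, not the paper's wireless adaptations for hidden nodes:

```python
# CAN-style bitwise dominance arbitration: the lowest priority value
# (most dominant bit pattern) wins the channel without any collision.

def arbitrate(priorities, bits=8):
    contenders = list(priorities)
    for i in reversed(range(bits)):               # MSB transmitted first
        bus = min((p >> i) & 1 for p in contenders)   # dominant 0 wins the bit
        # nodes whose bit lost (sent 1, saw 0) back off for this round
        contenders = [p for p in contenders if (p >> i) & 1 == bus]
    return contenders[0]   # unique winner when priorities are distinct
```

Because arbitration resolves bit by bit on the shared medium, the highest-priority message always goes through undamaged, which is what makes static-priority schedulability analysis possible on top of it.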

Relevance:

20.00%

Publisher:

Abstract:

Different heating systems have been used in pultrusion, the most widely used being planar resistance heaters. The primary objective of this study was to develop an improved heating system and compare its performance with that of a system with planar resistances. In this study, thermography was used to better understand the temperature profile along the die. Finite element analysis was performed to determine the amount of energy consumed by the heating systems. Improvements were made to the die to test the new heating system, and it was found that the new system reduced the setup time and energy consumption by approximately 57%.

Relevance:

20.00%

Publisher:

Abstract:

Dissertation submitted to obtain the degree of Doctor in Informatics from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia