974 results for distributed environment
Abstract:
Virtual and augmented reality (VR/AR) are increasingly being used in various business scenarios and are important driving forces in technology development. However, the use of these technologies in the home environment is restricted by several factors, including the lack of low-cost (from the client's point of view), high-performance solutions. In this paper, we present a general client/server rendering architecture based on real-time concepts, including support for a wide range of client platforms and applications. The idea of focusing on the real-time behaviour of all components involved in distributed IP-based VR scenarios is new and has not been addressed before, except for simple sub-solutions. This is considered “the most significant problem with the IP environment” [1]. Thus, the most important contribution of this research will be the holistic approach, in which networking, end-system and rendering aspects are integrated into a cost-effective infrastructure for building distributed real-time VR applications on IP-based networks.
Abstract:
This work focuses on highly dynamic distributed systems with Quality of Service (QoS) constraints (most importantly, real-time constraints). To that end, real-time applications may benefit from code offloading techniques, so that parts of an application can be offloaded and executed, as services, by neighbour nodes that are willing to cooperate in such computations. These applications explicitly state their QoS requirements, which are translated into resource requirements in order to evaluate the feasibility of accepting other applications into the system.
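The QoS-to-resource translation and admission step described above can be made concrete with a small, hedged sketch: assuming offloaded services are modelled as periodic tasks with a worst-case execution time and a period (an assumption, not stated in the abstract), a neighbour node can run a utilization-based feasibility test before accepting the work. All names and the EDF-style bound are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Task:
    wcet: float    # worst-case execution time (ms) -- illustrative parameter
    period: float  # activation period (ms)

def utilization(tasks):
    """Total processor utilization of a task set."""
    return sum(t.wcet / t.period for t in tasks)

def can_admit(local_tasks, offloaded_task, bound=1.0):
    """Admit the offloaded service only if the resulting task set stays
    within the schedulability bound (EDF-style: U <= 1)."""
    return utilization(local_tasks + [offloaded_task]) <= bound

# Example: a neighbour node with spare capacity accepts the request.
resident = [Task(wcet=2.0, period=10.0), Task(wcet=5.0, period=20.0)]
request = Task(wcet=3.0, period=15.0)
print(can_admit(resident, request))  # True: 0.2 + 0.25 + 0.2 = 0.65 <= 1.0
```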
Abstract:
This paper presents the distributed environment for virtual and/or real experiments for underwater robots (DEVRE). This environment is composed of a set of processes running on a local area network spanning three sites: 1) the onboard AUV computer; 2) a surface computer used as the human-machine interface (HMI); and 3) a computer used for simulating the vehicle dynamics and representing the virtual world. The HMI can be transparently linked to the real sensors and actuators, dealing with a real mission. It can also be linked with virtual sensors and virtual actuators, dealing with a virtual mission. The aim of DEVRE is to assist engineers during software development and testing in the lab prior to real experiments.
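The transparent switch between real and virtual sensors suggests a common interface behind which either a hardware driver or the dynamics simulator sits. The sketch below is a minimal Python illustration of that idea only; DEVRE's actual process structure is not shown, and `device.sample()` / `current_depth()` are hypothetical calls.

```python
from abc import ABC, abstractmethod

class DepthSensor(ABC):
    """Common interface: the HMI and control code depend only on this."""
    @abstractmethod
    def read(self) -> float:
        ...

class RealDepthSensor(DepthSensor):
    def __init__(self, device):
        self.device = device          # handle to the onboard pressure sensor
    def read(self) -> float:
        return self.device.sample()   # hypothetical driver call

class SimulatedDepthSensor(DepthSensor):
    def __init__(self, dynamics_model):
        self.model = dynamics_model   # the virtual-world dynamics process
    def read(self) -> float:
        return self.model.current_depth()  # hypothetical simulator call

def hmi_display(sensor: DepthSensor):
    # Identical HMI code for a real or a virtual mission.
    print(f"depth: {sensor.read():.2f} m")
```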
Abstract:
Simulation has traditionally been used for analyzing the behavior of complex real-world problems. Even though only some features of the problems are considered, simulation time tends to become quite high even for common simulation problems. Parallel and distributed simulation is a viable technique for accelerating the simulations. The success of parallel simulation depends heavily on the combination of the simulation application, the algorithm and the environment. In this thesis, a conservative parallel simulation algorithm is applied to the simulation of a cellular network application in a distributed workstation environment. This thesis presents a distributed simulation environment, Diworse, which is based on the use of networked workstations. The distributed environment is considered especially hard for conservative simulation algorithms due to the high cost of communication. In this thesis, however, the distributed environment is shown to be a viable alternative if the amount of communication is kept reasonable. Novel ideas of multiple message simulation and channel reduction enable efficient use of this environment for the simulation of a cellular network application. The distribution of the simulation is based on a modification of the well-known Chandy-Misra deadlock avoidance algorithm with null messages. The basic Chandy-Misra algorithm is modified by using the null message cancellation and multiple message simulation techniques. The modifications reduce the number of null messages and the time required for their execution, thus reducing the overall simulation time. The null message cancellation technique reduces the processing time of null messages, as an arriving null message cancels other unprocessed null messages. The multiple message simulation forms groups of messages, as it simulates several messages before it releases the newly created messages. If the message population in the simulation is sufficient, no additional delay is caused by this operation. A new technique for considering the simulation application is also presented. The performance is improved by establishing a neighborhood for the simulation elements. The neighborhood concept is based on a channel reduction technique, where the properties of the application exclusively determine which connections are necessary when a certain accuracy of the simulation results is required. Distributed simulation is also analyzed in order to find out the effect of the different elements in the implemented simulation environment. This analysis is performed using critical path analysis, which allows determination of a lower bound for the simulation time. In this thesis, critical times are computed for sequential and parallel traces. The analysis based on sequential traces reveals the parallel properties of the application, whereas the analysis based on parallel traces reveals the properties of the environment and the distribution.
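For readers unfamiliar with the conservative scheme mentioned above, the following minimal sketch illustrates the Chandy-Misra null-message idea together with null-message cancellation. It is an illustrative outline only, not the Diworse implementation; class and method names are invented.

```python
class InputChannel:
    """One input channel of a logical process in a conservative simulation."""
    def __init__(self):
        self.pending = []   # (timestamp, is_null, event) messages not yet processed
        self.clock = 0.0    # lower bound on future timestamps from this channel

    def receive(self, timestamp, is_null, event=None):
        if is_null:
            # Null-message cancellation: an arriving null message makes older,
            # still-unprocessed null messages redundant, so drop them.
            self.pending = [m for m in self.pending if not m[1]]
        self.pending.append((timestamp, is_null, event))
        self.pending.sort(key=lambda m: m[0])
        self.clock = max(self.clock, timestamp)


class LogicalProcess:
    def __init__(self, channels, lookahead):
        self.channels = channels    # dict: neighbour id -> InputChannel
        self.lookahead = lookahead  # minimum timestamp increment of this process

    def safe_time(self):
        # Chandy-Misra condition: events with timestamps up to the minimum
        # channel clock can be simulated without causality violations.
        return min(ch.clock for ch in self.channels.values())

    def null_message_time(self, now):
        # Timestamp promised to neighbours when no real message is sent;
        # sending these promises is what avoids deadlock.
        return now + self.lookahead
```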
Abstract:
This paper presents a distributed model predictive control (DMPC) approach for indoor thermal comfort that simultaneously optimizes the consumption of a limited shared energy resource. The control objective of each subsystem is to minimize the heating/cooling energy cost while keeping the indoor temperature and the power used inside bounds. In a distributed coordinated environment, the control uses multiple dynamically decoupled agents (one for each subsystem/house) aiming to achieve satisfaction of the coupling constraints. According to its hourly power demand profile, each house assigns a priority level that indicates how much it is willing to bid in an auction to consume the limited clean resource. This procedure allows the bidding value to vary hourly and, consequently, the order in which the agents access the clean energy also varies. Besides the power constraints, all houses also have thermal comfort constraints that must be fulfilled. The system is simulated with several houses in a distributed environment.
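The hourly auction for the limited clean resource can be illustrated with a small sketch that grants access in descending bid order until the clean capacity is exhausted (the MPC optimization itself is omitted). Function and variable names are illustrative, not taken from the paper.

```python
def allocate_clean_power(bids, demands, clean_capacity):
    """Grant access to the limited clean resource in descending bid order.

    bids:           dict house -> bid value for the current hour
    demands:        dict house -> requested power (kW) for the current hour
    clean_capacity: total clean power available this hour (kW)
    Returns dict house -> clean power granted; the remainder must be drawn
    from the (more expensive) conventional source.
    """
    granted = {}
    remaining = clean_capacity
    for house in sorted(bids, key=bids.get, reverse=True):
        granted[house] = min(demands[house], remaining)
        remaining -= granted[house]
    return granted

# Example hour: 6 kW of clean power shared by three houses.
bids = {"A": 0.8, "B": 0.5, "C": 0.9}
demands = {"A": 3.0, "B": 4.0, "C": 2.5}
print(allocate_clean_power(bids, demands, 6.0))
# {'C': 2.5, 'A': 3.0, 'B': 0.5}  -> lowest bidder only gets the leftover
```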
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
The objective of the thesis is to enhance the understanding of the management of the front end phases of the innovation process in a networked environment. The thesis approaches the front end of innovation from three perspectives: the strategy, processes and systems of innovation. The purpose of using different perspectives in the thesis is to provide an extensive systemic view of the front end and to uncover the complex nature of innovation management. The context of the research is the networked operating environment of firms. The unit of analysis is the firm itself or its innovation processes, which means that this research approaches innovation networks from the point of view of a firm. The strategy perspective of the thesis emphasises the importance of purposeful innovation management, the innovation strategy of firms. The role of innovation processes is critical in carrying out innovation strategies in practice, supporting the development of organizational routines for innovation, and driving the strategic renewal of companies. The primary focus of the thesis from the systems perspective is on idea management systems, which are defined as a part of innovation management systems, and defined for this thesis as any working combination of methodology and tools (manual or IT-supported) that enhances the management of innovations within their early phases. The main contributions of the thesis are the managerial frameworks developed for managing the front end of innovation, which purposefully “wire” the front end of innovation into the strategy and business processes of a firm. The thesis contributes to modern innovation management by connecting the internal and external collaboration networks as foundational elements for successful management of the early phases of innovation processes in a dynamic environment. The innovation capability of a firm is largely defined by its ability to rely on and make use of internal and external collaboration already during the front end activities, which by definition include opportunity identification and analysis, idea generation, proliferation and selection, and concept definition. More specifically, coordination of the interfaces between these activities, and between the internal and external innovation environments of a firm, is emphasised. The role of information systems, in particular idea management systems, is to support and delineate the innovation-oriented behaviour and interaction of individuals and organizations during front end activities. The findings and frameworks developed in the thesis can be used by companies for purposeful promotion of their front end processes. The thesis provides a systemic strategy framework for managing the front end of innovation, not as a separate process, but as an elemental bundle of activities that is closely linked to the overall innovation process and strategy of a firm in a distributed environment. The theoretical contribution of the thesis relies on the advancement of the open innovation paradigm in the strategic context of a firm within its internal and external innovation environments. This thesis applies the constructive research approach and case study methodology to provide theoretically significant results which are also practically beneficial.
Abstract:
Usually, a Petri net is applied as an RFID modelling tool. This paper, instead, presents another approach to Petri nets for RFID systems. This approach, called elementary Petri net inside an RFID distributed database, or PNRD, is the first step towards improving the integration of RFID and control systems, based on a formal data structure to identify and update the product state during real-time process execution, allowing automatic discovery of unexpected events during tag data capture. There are two main features in this approach: using RFID tags as the database of the expected object process and as the record of the last product state; and applying Petri net analysis to automatically update the last product state registry during reader data capture. RFID reader data capture can be viewed, in Petri nets, as a direct analysis of locality for a specific transition that holds in a specific workflow. Following this direction, RFID readers are expected to store the Petri net control vector list related to each tag id. This paper presents the PNRD cornerstones and a PNRD implementation example in a software tool called DEMIS (Distributed Environment in Manufacturing Information Systems).
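The marking update of an elementary Petri net on a reader event can be sketched with the standard state equation M' = M + C·u, where C is the incidence matrix and u the firing vector; a negative entry in the result flags an unexpected event. This is a generic Petri net illustration, not the PNRD data structure itself; the places and transition are invented.

```python
import numpy as np

def fire(marking, incidence, transition):
    """Update a Petri net marking when a reader event fires one transition.

    marking:    current token vector of the product state (one entry per place)
    incidence:  incidence matrix C = post - pre (places x transitions)
    transition: index of the transition associated with the reader event
    Returns the new marking, or None if the transition is not enabled
    (an unexpected event in the workflow).
    """
    firing = np.zeros(incidence.shape[1])
    firing[transition] = 1
    new_marking = marking + incidence @ firing
    if (new_marking < 0).any():   # a required input token was missing
        return None
    return new_marking

# Example: two places (p0 = "at station", p1 = "inspected"), one transition.
C = np.array([[-1],    # firing removes the token from p0
              [ 1]])   # and puts it in p1
print(fire(np.array([1, 0]), C, 0))  # [0. 1.] -> product state advanced
print(fire(np.array([0, 1]), C, 0))  # None   -> unexpected event detected
```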
Abstract:
The work described in this thesis aims to support the distributed design of integrated systems and considers specifically the need for collaborative interaction among designers. Particular emphasis was given to issues which were only marginally considered in previous approaches, such as the abstraction of the distribution of design automation resources over the network, the possibility of both synchronous and asynchronous interaction among designers, and the support for extensible design data models. Such issues demand a rather complex software infrastructure, as possible solutions must encompass a wide range of software modules: from user interfaces to middleware to databases. To build such a structure, several engineering techniques were employed and some original solutions were devised. The core of the proposed solution is based on the joint application of two homonymic technologies: CAD Frameworks and object-oriented frameworks. The former concept was coined in the late 1980s within the electronic design automation community and comprises a layered software environment which aims to support CAD tool developers, CAD administrators/integrators and designers. The latter, developed during the last decade by the software engineering community, is a software architecture model for building extensible and reusable object-oriented software subsystems. In this work, we proposed to create an object-oriented framework which includes extensible sets of design data primitives and design tool building blocks. Such an object-oriented framework is included within a CAD Framework, where it plays important roles in typical CAD Framework services such as design data representation and management, versioning, user interfaces, design management and tool integration. The implemented CAD Framework, named Cave2, followed the classical layered architecture presented by Barnes, Harrison, Newton and Spickelmier, but the possibilities granted by the object-oriented framework foundations allowed a series of improvements which were not available in previous approaches:
- object-oriented frameworks are extensible by design, so this is also true of the implemented sets of design data primitives and design tool building blocks. This means that both the design representation model and the software modules dealing with it can be upgraded or adapted to a particular design methodology, and that such extensions and adaptations will still inherit the architectural and functional aspects implemented in the object-oriented framework foundation;
- the design semantics and the design visualization are both part of the object-oriented framework, but in clearly separated models. This allows for different visualization strategies for a given design data set, which gives collaborating parties the flexibility to choose individual visualization settings;
- the control of the consistency between semantics and visualization, a particularly important issue in a design environment with multiple views of a single design, is also included in the foundations of the object-oriented framework. This mechanism is generic enough to be used by further extensions of the design data model, as it is based on the inversion of control between view and semantics: the view receives the user input and propagates the event to the semantic model, which evaluates whether a state change is possible and, if so, triggers the change of state of both semantics and view. Our approach took advantage of this inversion of control and included a layer between semantics and view to take into account the possibility of multi-view consistency;
- to optimize the consistency control mechanism between views and semantics, we propose an event-based approach that captures each discrete interaction of a designer with his/her respective design views. The information about each interaction is encapsulated inside an event object, which may be propagated to the design semantics, and thus to other possible views, according to the consistency policy in use. Furthermore, the use of event pools allows for late synchronization between view and semantics in case of unavailability of a network connection between them;
- the use of proxy objects significantly raised the abstraction of the integration of design automation resources, as either remote or local tools and services are accessed through method calls on a local object. The connection to remote tools and services using a look-up protocol also completely abstracted the network location of such resources, allowing for resource addition and removal at runtime;
- the implemented CAD Framework is completely based on Java technology, so it relies on the Java Virtual Machine as the layer which grants independence between the CAD Framework and the operating system.
All these improvements contributed to a higher abstraction of the distribution of design automation resources and also introduced a new paradigm for the remote interaction between designers. The resulting CAD Framework is able to support fine-grained collaboration based on events, so every single design update performed by a designer can be propagated to the rest of the design team regardless of their location in the distributed environment. This can increase group awareness and allow a richer transfer of experiences among designers, significantly improving the collaboration potential when compared to previously proposed file-based or record-based approaches. Three different case studies were conducted to validate the proposed approach, each one focusing on a subset of the contributions of this thesis. The first one uses the proxy-based resource distribution architecture to implement a prototyping platform using reconfigurable hardware modules. The second one extends the foundations of the implemented object-oriented framework to support interface-based design. Such extensions, design representation primitives and tool blocks, are used to implement a design entry tool named IBlaDe, which allows the collaborative creation of functional and structural models of integrated systems. The third case study regards the possibility of integrating multimedia metadata into the design data model. This possibility is explored in the frame of an online educational and training platform.
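The inversion of control between view and semantics, and the event-based propagation to other views, can be sketched as follows. This is a generic illustration of the mechanism described in the abstract, not Cave2 code; all names and the consistency rule are placeholders.

```python
class SemanticModel:
    """Owns the design state; views never change state directly."""
    def __init__(self):
        self.views = []
        self.state = {}

    def attach(self, view):
        self.views.append(view)

    def handle(self, event):
        # Inversion of control: the view only reports the interaction;
        # the semantic model decides whether the state change is possible.
        if self._is_legal(event):
            self.state[event["object"]] = event["value"]
            for view in self.views:          # keep every view consistent
                view.refresh(event)

    def _is_legal(self, event):
        return event.get("value") is not None   # placeholder consistency rule


class View:
    def __init__(self, name, model):
        self.name = name
        self.model = model
        model.attach(self)

    def on_user_input(self, obj, value):
        # The interaction is encapsulated in an event object and propagated.
        self.model.handle({"object": obj, "value": value, "source": self.name})

    def refresh(self, event):
        print(f"{self.name}: {event['object']} -> {event['value']}")


model = SemanticModel()
a, b = View("designer A", model), View("designer B", model)
a.on_user_input("net_width", 4)   # both views receive the update
```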
Abstract:
Membrane systems are computationally equivalent to Turing machines. However, their distributed and massively parallel nature yields polynomial solutions to traditionally non-polynomial problems. It is therefore very important to develop dedicated hardware and software implementations exploiting these two features of membrane systems. In distributed implementations of P systems, the communication bottleneck problem has arisen: as the number of membranes grows, the network gets congested. The purpose of distributed architectures is to reach a compromise between the massively parallel character of the system and the evolution step time needed to transit from one configuration of the system to the next one, solving the communication bottleneck problem. The goal of this paper is twofold. Firstly, to survey in a systematic and uniform way the main results regarding the way membranes can be placed on processors in order to get a software/hardware simulation of P systems in a distributed environment. Secondly, we improve some results about the membrane dissolution problem, prove that it is connected, and discuss the possibility of simulating this property in the distributed model. All this yields an improvement in the implementation of system parallelism, since it increases the parallelism of the external communication among processors. The proposed ideas improve previous architectures that tackle the communication bottleneck problem, through a reduction of the total time of an evolution step, an increase in the number of membranes that can run on a processor and a reduction in the number of processors.
Abstract:
The goal of this paper is twofold. Firstly, to survey in a systematic and uniform way the main results regarding the way membranes can be placed on processors in order to get a software/hardware simulation of P systems in a distributed environment. Secondly, we improve some results about the membrane dissolution problem, prove that it is connected, and discuss the possibility of simulating this property in the distributed model. All this yields an improvement in the implementation of system parallelism, since it increases the parallelism of the external communication among processors. Also, the number of processors grows in such a way that the increase in the parallelism of the application of the evolution rules and of the internal communications is notable. The proposed ideas improve previous architectures that tackle the communication bottleneck problem, through a reduction of the total time of an evolution step, an increase in the number of membranes that can run on a processor and a reduction in the number of processors.
Abstract:
Adaptability of distributed object-oriented enterprise frameworks is a critical requirement for system evolution. Today, building adaptive services is a complex task due to the lack of adequate framework support in the distributed computing environment. In this thesis, we propose a Meta Level Component-Based Framework (MELC) which uses distributed computing design patterns as components to develop an adaptable pattern-oriented framework for distributed computing applications. We describe our novel approach of combining a meta architecture with a pattern-oriented framework, resulting in an adaptable framework which provides a mechanism to facilitate system evolution. The critical nature of distributed technologies requires frameworks to be adaptable. Our framework employs a meta architecture: it supports dynamic adaptation of feasible design decisions in the framework design space by specifying and coordinating meta-objects that represent various aspects of the distributed environment. The meta architecture in the MELC framework can provide the adaptability needed for system evolution. This approach resolves the problem of dynamic adaptation in the framework, which is encountered in most distributed applications. The concept of using a meta architecture to produce an adaptable pattern-oriented framework for distributed computing applications is new and has not previously been explored in research. As the framework is adaptable, the proposed architecture of the pattern-oriented framework has the ability to dynamically adopt new design patterns to address technical system issues in the domain of distributed computing, and these patterns can be woven together to shape the framework in the future. We show how MELC can be used effectively to enable dynamic component integration and to separate system functionality from business functionality. We demonstrate how MELC provides an adaptable and dynamic run-time environment using our system configuration and management utility. We also highlight how MELC brings significant adaptability to system evolution through a prototype E-Bookshop application that assembles its business functions with distributed computing components at the meta level of the MELC architecture. Our performance tests show that MELC does not entail prohibitive performance tradeoffs. The work to develop the MELC framework for distributed computing applications has emerged as a promising way to meet current and future challenges in the distributed environment.
Abstract:
Component-based development (CBD) has become an important emerging topic in the software engineering field. It promises long-sought-after benefits such as increased software reuse, reduced development time to market and, hence, reduced software production cost. Despite the huge potential, the lack of reasoning support and of a development environment for component modeling and verification may hinder its development. Methods and tools that can support component model analysis are highly appreciated by industry. Such tool support should be fully automated as well as efficient. At the same time, the reasoning tool should scale up well, as it may need to handle the hundreds or even thousands of components that a modern software system may have. Furthermore, a distributed environment that can effectively manage and compose components is also desirable. In this paper, we present an approach to the modeling and verification of a newly proposed component model using Semantic Web languages and their reasoning tools. We use the Web Ontology Language and the Semantic Web Rule Language to precisely capture the inter-relationships and constraints among the entities in a component model. Semantic Web reasoning tools are deployed to perform automated analysis of the component models. Moreover, we also propose a service-oriented architecture (SOA)-based Semantic Web environment for CBD. The adoption of Semantic Web services and SOA makes our component environment more reusable, scalable, dynamic and adaptive.
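The paper relies on OWL, SWRL and Semantic Web reasoners; as a much lighter stand-in, the sketch below encodes a toy component model as RDF triples with rdflib and checks a single inter-component constraint with a SPARQL query. Component, property and interface names are illustrative and not taken from the paper.

```python
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/component#")
g = Graph()

# Toy component model: Logger provides ILog; Billing requires ILog and IPay.
g.add((EX.Logger,  RDF.type, EX.Component))
g.add((EX.Billing, RDF.type, EX.Component))
g.add((EX.Logger,  EX.provides, EX.ILog))
g.add((EX.Billing, EX.requires, EX.ILog))
g.add((EX.Billing, EX.requires, EX.IPay))

# Constraint check: every required interface must be provided by some component.
unsatisfied = g.query(
    """
    SELECT ?component ?interface WHERE {
        ?component ex:requires ?interface .
        FILTER NOT EXISTS { ?provider ex:provides ?interface . }
    }
    """,
    initNs={"ex": EX},
)
for component, interface in unsatisfied:
    print(f"unsatisfied requirement: {component} -> {interface}")
# prints the Billing/IPay pair: no component provides IPay
```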