824 results for Distributed computer systems


Relevance: 40.00%

Abstract:

The Architecture, Engineering, Construction and Facilities Management (AEC/FM) industry is rapidly becoming a multidisciplinary, multinational and multi-billion-dollar economy, involving large numbers of actors working concurrently at different locations and using heterogeneous software and hardware technologies. Since the beginning of the last decade, a great deal of effort has been spent within the field of construction IT to integrate data and information from most of the computer tools used to carry out engineering projects. For this purpose, a number of integration models have been developed, such as web-centric systems and construction project modeling, a useful approach for representing construction projects and integrating data from various civil engineering applications. In the modern, distributed and dynamic construction environment it is important to retrieve and exchange information from different sources and in different data formats in order to improve the processes supported by these systems. Previous research demonstrated that a major hurdle to AEC/FM data integration in such systems is the variety of data types involved, with a significant part of the data stored in semi-structured or unstructured formats. Therefore, new integrative approaches are needed to handle non-structured data types such as images and text files. This research focuses on the integration of construction site images. These images are a significant part of the construction documentation, with thousands stored in the site photograph logs of large-scale projects. However, locating and identifying the image data needed for important decision-making processes is a hard and time-consuming task, and so far there have been no automated methods for associating these images with other related project objects. Automated methods for the integration of construction images are therefore important for construction information management. During this research, processes for the retrieval, classification, and integration of construction images in AEC/FM model-based systems have been explored. Specifically, a combination of techniques from the areas of image and video processing, computer vision, information retrieval, statistics, and content-based image and video retrieval was deployed to develop a methodology for retrieving related construction site image data from components of a project model. The method has been tested on construction site images from a variety of sources, including past and current building construction and transportation projects, and is able to automatically classify, store, integrate and retrieve image data files in inter-organizational systems so that they can be used in project management tasks.
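
The abstract leaves the retrieval machinery unspecified. As a rough, minimal sketch of the content-based image retrieval step it alludes to, assuming a standard color-histogram feature and cosine similarity as a baseline rather than the authors' exact pipeline:

    # Minimal content-based image retrieval sketch: color-histogram
    # features plus cosine similarity. A generic CBIR baseline, not
    # the published methodology.
    import numpy as np
    from PIL import Image

    def histogram_feature(path, bins=8):
        """RGB color histogram of an image, normalized to unit length."""
        img = np.asarray(Image.open(path).convert("RGB"))
        hist, _ = np.histogramdd(img.reshape(-1, 3),
                                 bins=(bins, bins, bins),
                                 range=((0, 256),) * 3)
        v = hist.ravel().astype(float)
        return v / (np.linalg.norm(v) or 1.0)

    def rank_images(query_path, image_paths):
        """Return image paths sorted by similarity to the query image."""
        q = histogram_feature(query_path)
        scored = [(float(histogram_feature(p) @ q), p) for p in image_paths]
        return [p for _, p in sorted(scored, reverse=True)]

In a real AEC/FM deployment the ranked results would then be linked to project-model components; that association step is beyond this sketch.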

Relevance: 40.00%

Abstract:

This paper proposes a pragmatic framework for classifying and analyzing developments in distributed automation and information systems, especially those that have been labeled intelligent systems for various reasons. The framework dissects the different stages in the standard feedback process and assesses distribution in terms of the level of granularity of the organization being considered. The framework has proved useful for comparing and assessing different distributed industrial control paradigms and for examining common features of different development projects, especially those sourced from different sectors or domains. © 2012 IFAC.
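
The paper's exact taxonomy is not reproduced in the abstract; purely as an illustration of the two axes it describes (feedback-process stage and organizational granularity), a classification entry might be modeled as below. All names here are hypothetical, not the paper's terminology.

    # Hypothetical encoding of the two classification axes: stages of the
    # standard feedback process, and the organizational level at which
    # intelligence is distributed.
    from dataclasses import dataclass
    from enum import Enum, auto

    class FeedbackStage(Enum):
        SENSE = auto()      # measurement / data acquisition
        DECIDE = auto()     # control-law or decision computation
        ACTUATE = auto()    # applying the decision to the process

    class Granularity(Enum):
        DEVICE = auto()
        CELL = auto()
        FACTORY = auto()
        ENTERPRISE = auto()

    @dataclass
    class SystemClassification:
        name: str
        distributed_stages: set      # which feedback stages are distributed
        granularity: Granularity     # organizational level considered

    example = SystemClassification(
        name="agent-based cell controller",
        distributed_stages={FeedbackStage.DECIDE},
        granularity=Granularity.CELL,
    )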

Relevance: 40.00%

Abstract:

This paper extends the authors' earlier work, which adapted robust multiplexed MPC for application to distributed control of multi-agent systems with non-interacting dynamics and coupled constraint sets in the presence of persistent, unknown but bounded disturbances. Specifically, we propose exploiting the single-agent update nature of the multiplexed approach and fixing the update sequence to enable input move-blocking and increased discretisation rates. This permits a higher rate of individual policy update to be achieved while incurring no additional computational cost in the corresponding optimal control problems to be solved. A disturbance feedback policy is included between updates to facilitate finding feasible solutions. The new formulation inherits the property of rapid response to disturbances from multiplexing the control, and numerical results show that fixing the update sequence does not incur any loss in performance. © 2011 IFAC.
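
The abstract does not spell out the policy parameterization. In the robust MPC literature, the disturbance feedback applied between updates is commonly the affine form below (standard notation, not necessarily the paper's exact formulation):

    u_{k+i} = \bar{u}_{k+i} + \sum_{j=0}^{i-1} M_{i,j}\, w_{k+j}, \qquad w_{k+j} \in \mathcal{W},

where \bar{u} is the nominal input sequence, the M_{i,j} are feedback gains acting on past disturbances w, and \mathcal{W} is the bounded disturbance set. Because u is affine in w, the resulting constrained optimization over (\bar{u}, M) remains convex.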

Relevance: 40.00%

Abstract:

This paper presents explicit solutions for a few distributed LQG problems in which players communicate their states with delays. The resulting control structure is reminiscent of a simple management hierarchy, in which a top-level input is modified by newer, more localized information as it gets passed down the chain of command. It is hoped that the controller forms arising through optimization may lend insight into the control strategies of biological and social systems with communication delays. © 2011 IEEE.
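
Schematically, and hedging that this is the generic structure of delayed-sharing LQG solutions rather than the paper's exact formulas, each player's input decomposes into a term driven by the delayed common information and a local correction based on fresher private information:

    u_i(t) = K\,\hat{x}\big(t \mid \mathcal{I}_{\mathrm{common}}(t)\big)
             + K_i\Big(\hat{x}\big(t \mid \mathcal{I}_i(t)\big) - \hat{x}\big(t \mid \mathcal{I}_{\mathrm{common}}(t)\big)\Big),

where \mathcal{I}_{\mathrm{common}}(t) is the information every player has received by time t (states older than the largest delay) and \mathcal{I}_i(t) is player i's own, fresher information. The first term plays the role of the top-level input; the second is the localized correction applied down the hierarchy.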

Relevance: 40.00%

Abstract:

This paper presents an insight into leather manufacturing processes, depicting the peculiarities and challenges faced by the leather industry. An analysis of this industry reveals the need for a new approach to optimize the productivity of leather processing operations, ensure consistent leather quality, mitigate the adverse health effects in tannery workers exposed to chemicals, and comply with environmental regulation. The holonic manufacturing systems (HMS) paradigm represents a bottom-up, distributed approach that provides stability, adaptability, efficient use of resources and plug-and-operate functionality to the manufacturing system. A vision of how an HMS might operate in a tannery is illustrated, presenting the rationale behind its application in this industry. © 2013 Springer-Verlag.

Relevance: 40.00%

Abstract:

This chapter proposes a simple, pragmatic framework for classifying and analyzing developments in distributed automation and information systems, especially those that have been labelled intelligent systems for various reasons. The framework dissects the different stages in the standard feedback process and assesses distribution in terms of the level of granularity of the organization being considered. The framework has proved useful for comparing and assessing different distributed industrial control paradigms and for examining common features of different development projects, especially those sourced from different sectors or domains. © Springer-Verlag Berlin Heidelberg 2013.

Relevance: 40.00%

Abstract:

The miscibility and structure of A-B copolymer/C homopolymer blends with special interactions were studied by Monte Carlo simulation in two dimensions. The interaction between segment A and segment C was repulsive, whereas that between segment B and segment C was attractive. To study the effect of copolymer chain structure on the morphology and structure of the blends, alternating, random and block A-B copolymers were each introduced into the blends. The simulation results indicated that the miscibility of A-B block copolymer/C homopolymer blends depends on the chain structure of the A-B copolymer. Compared with the alternating or random copolymer, the block copolymer, especially the diblock copolymer, leads to poorer miscibility of the blends. Moreover, for diblock A-B copolymer/C homopolymer blends, an obvious self-organized core-shell structure was observed for B-segment compositions from 20% to 60%. If the diblock copolymer composition in the blends is less than 40%, however, the self-organized core-shell structure could form for B-segment compositions from 10% to 90%. Furthermore, statistical analysis of the simulation results showed that the core sizes tended to increase continuously, and their distribution became wider, with decreasing B-segment content.
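
The lattice model is not detailed in the abstract. As a schematic illustration of the kind of two-dimensional Monte Carlo step involved, here is a minimal Metropolis swap move for a three-component (A, B, C) lattice with A-C repulsive and B-C attractive interactions; the interaction values and move set are illustrative, not the authors' exact model:

    # Schematic 2D lattice Metropolis step for a three-component blend.
    # Interaction energies are illustrative: A-C repulsive, B-C attractive,
    # all other pairs neutral.
    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C = 0, 1, 2
    EPS = np.zeros((3, 3))
    EPS[A, C] = EPS[C, A] = +1.0   # repulsive
    EPS[B, C] = EPS[C, B] = -1.0   # attractive

    def site_energy(lat, i, j):
        """Interaction energy of site (i, j) with its 4 neighbors (periodic)."""
        n = lat.shape[0]
        s = lat[i, j]
        return sum(EPS[s, lat[(i + di) % n, (j + dj) % n]]
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    def metropolis_swap(lat, beta=1.0):
        """Attempt one swap of two random neighboring sites."""
        n = lat.shape[0]
        i, j = rng.integers(n, size=2)
        di, dj = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
        k, l = (i + di) % n, (j + dj) % n
        e_old = site_energy(lat, i, j) + site_energy(lat, k, l)
        lat[i, j], lat[k, l] = lat[k, l], lat[i, j]
        e_new = site_energy(lat, i, j) + site_energy(lat, k, l)
        if rng.random() >= np.exp(-beta * (e_new - e_old)):
            lat[i, j], lat[k, l] = lat[k, l], lat[i, j]  # reject: swap back

    # Random initial 32x32 lattice with a prescribed composition.
    lattice = rng.choice([A, B, C], size=(32, 32), p=[0.2, 0.2, 0.6])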

Relevance: 40.00%

Abstract:

Parallel shared-memory machines with hundreds or thousands of processor-memory nodes have been built; in the future we will see machines with millions or even billions of nodes. Associated with such large systems is a new set of design challenges. Many problems must be addressed by an architecture in order for it to be successful; of these, we focus on three in particular. First, a scalable memory system is required. Second, the network messaging protocol must be fault-tolerant. Third, the overheads of thread creation, thread management and synchronization must be extremely low. This thesis presents the complete system design for Hamal, a shared-memory architecture which addresses these concerns and is directly scalable to one million nodes. Virtual memory and distributed objects are implemented in a manner that requires neither inter-node synchronization nor the storage of globally coherent translations at each node. We develop a lightweight fault-tolerant messaging protocol that guarantees message delivery and idempotence across a discarding network. A number of hardware mechanisms provide efficient support for massive multithreading and fine-grained synchronization. Experiments are conducted in simulation, using a trace-driven network simulator to investigate the messaging protocol and a cycle-accurate simulator to evaluate the Hamal architecture. We determine implementation parameters for the messaging protocol which optimize performance. A discarding network is easier to design and can be clocked at a higher rate, and we find that with this protocol its performance can approach that of a non-discarding network. Our simulations of Hamal demonstrate the effectiveness of its thread management and synchronization primitives. In particular, we find register-based synchronization to be an extremely efficient mechanism which can be used to implement a software barrier with a latency of only 523 cycles on a 512 node machine.
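
As a rough illustration of the fault-tolerant, idempotent messaging the thesis describes, the sketch below shows a generic sequence-number/retransmit/deduplicate scheme over a lossy ("discarding") network; Hamal's actual protocol is implemented in hardware and differs in detail.

    # At-least-once delivery with idempotent application: the sender
    # retransmits until acknowledged, and the receiver deduplicates by
    # sequence number so a retransmitted message is applied only once.
    class Sender:
        def __init__(self, network):
            self.network = network
            self.seq = 0
            self.unacked = {}            # seq -> payload awaiting ack

        def send(self, payload):
            self.seq += 1
            self.unacked[self.seq] = payload
            self.network.transmit(self.seq, payload)   # may be dropped

        def on_ack(self, seq):
            self.unacked.pop(seq, None)

        def retransmit_timer(self):
            for seq, payload in self.unacked.items():
                self.network.transmit(seq, payload)    # resend unacked

    class Receiver:
        def __init__(self):
            self.delivered = set()       # seqs already applied (idempotence)

        def on_message(self, seq, payload, ack):
            if seq not in self.delivered:
                self.delivered.add(seq)
                self.apply(payload)      # side effect happens exactly once
            ack(seq)                     # re-ack duplicates so sender stops

        def apply(self, payload):
            print("delivered:", payload)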

Relevance: 40.00%

Abstract:

ISBN: 3-540-76198-5 (out of print)

Relevance: 40.00%

Abstract:

The proliferation of inexpensive workstations and networks has prompted several researchers to use such distributed systems for parallel computing. Attempts have been made to offer a shared-memory programming model on such distributed-memory computers. Most systems provide a shared memory that is coherent, in that all processes that use it agree on the order of all memory events. This dissertation explores the possibility of a significant improvement in the performance of some applications when they use non-coherent memory. First, a new formal model to describe existing non-coherent memories is developed. I use this model to prove that certain problems can be solved using asynchronous iterative algorithms on shared memory in which the coherence constraints are substantially relaxed. In the course of developing the model I discovered a new type of non-coherent behavior called Local Consistency. Second, a programming model, Mermera, is proposed. It provides programmers with a choice of hierarchically related non-coherent behaviors along with one coherent behavior. Thus, one can trade off the ease of programming with coherent memory for improved performance with non-coherent memory. As an example, I present a program to solve a linear system of equations using an asynchronous iterative algorithm. This program uses all the behaviors offered by Mermera. Third, I describe the implementation of Mermera on a BBN Butterfly TC2000 and on a network of workstations. The performance of a version of the equation-solving program that uses all the behaviors of Mermera is compared with that of a version that uses coherent behavior only. For a system of 1000 equations, the former exhibits at least a 5-fold improvement in convergence time over the latter. The version using coherent behavior only does not benefit from employing more than one workstation to solve the problem, while the program using non-coherent behavior continues to achieve improved performance as the number of workstations is increased from 1 to 6. This measurement corroborates our belief that non-coherent shared memory can be a performance boon for some applications.
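
To make the equation-solving example concrete, here is a minimal sketch of an asynchronous Jacobi iteration in the spirit the abstract describes: workers update their own entries of x while reading possibly stale values written by others, with no coherence barrier between sweeps. This illustrates the technique only, not the Mermera API (and CPython's GIL serializes the updates in practice; the point is the lock-free, stale-read logic).

    # Asynchronous Jacobi: each worker repeatedly updates its own rows of x,
    # reading the shared vector without synchronization. For diagonally
    # dominant A, such asynchronous iterations still converge.
    import threading
    import numpy as np

    def async_jacobi(A, b, sweeps=200, workers=4):
        n = len(b)
        x = np.zeros(n)                       # shared, read without locks
        D = np.diag(A)

        def worker(rows):
            for _ in range(sweeps):
                for i in rows:
                    # Reads of x[j] may be stale: that is the point.
                    x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / D[i]

        threads = [threading.Thread(target=worker,
                                    args=(range(w, n, workers),))
                   for w in range(workers)]
        for t in threads: t.start()
        for t in threads: t.join()
        return x

    # Diagonally dominant test system.
    n = 100
    A = np.eye(n) * 4.0 + np.random.rand(n, n) * 0.01
    b = np.ones(n)
    x = async_jacobi(A, b)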

Relevance: 40.00%

Abstract:

The proliferation of inexpensive workstations and networks has created a new era in distributed computing. At the same time, non-traditional applications such as computer-aided design (CAD), computer-aided software engineering (CASE), geographic-information systems (GIS), and office-information systems (OIS) have placed increased demands for high-performance transaction processing on database systems. The combination of these factors gives rise to significant challenges in the design of modern database systems. In this thesis, we propose novel techniques whose aim is to improve the performance and scalability of these new database systems. These techniques exploit client resources through client-based transaction management. Client-based transaction management is realized by providing logging facilities locally even when data is shared in a global environment. This thesis presents several recovery algorithms which utilize client disks for storing recovery related information (i.e., log records). Our algorithms work with both coarse and fine-granularity locking and they do not require the merging of client logs at any time. Moreover, our algorithms support fine-granularity locking with multiple clients permitted to concurrently update different portions of the same database page. The database state is recovered correctly when there is a complex crash as well as when the updates performed by different clients on a page are not present on the disk version of the page, even though some of the updating transactions have committed. This thesis also presents the implementation of the proposed algorithms in a memory-mapped storage manager as well as a detailed performance study of these algorithms using the OO1 database benchmark. The performance results show that client-based logging is superior to traditional server-based logging. This is because client-based logging is an effective way to reduce dependencies on server CPU and disk resources and, thus, prevents the server from becoming a performance bottleneck as quickly when the number of clients accessing the database increases.
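
The thesis's log-record layout is not given in the abstract; as a rough illustration of client-based logging with sub-page (fine-granularity) updates, a client-side write-ahead log record might look like the sketch below. All field names are invented for illustration.

    # Hypothetical client-side WAL record carrying both undo and redo
    # images of a sub-page byte range, so recovery can reapply committed
    # updates missing from the disk page and roll back uncommitted ones.
    from dataclasses import dataclass

    @dataclass
    class ClientLogRecord:
        lsn: int            # client-local log sequence number
        txn_id: int
        page_id: int
        offset: int         # byte offset within the page (sub-page granularity)
        before: bytes       # undo image of the updated bytes
        after: bytes        # redo image of the updated bytes

    def redo(page: bytearray, rec: ClientLogRecord):
        """Reapply an update during recovery if it is missing from the page."""
        page[rec.offset:rec.offset + len(rec.after)] = rec.after

    def undo(page: bytearray, rec: ClientLogRecord):
        """Roll back an update of an uncommitted transaction."""
        page[rec.offset:rec.offset + len(rec.before)] = rec.before

Because each client logs only its own updates at this byte granularity, two clients can update disjoint portions of the same page without ever merging their logs, which matches the recovery properties the abstract claims.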

Relevance: 40.00%

Abstract:

We examine the question of whether to employ the first-come-first-served (FCFS) discipline or the processor-sharing (PS) discipline at the hosts in a distributed server system. We are interested in the case in which service times are drawn from a heavy-tailed distribution, and so have very high variability. Traditional wisdom holds that when task sizes are highly variable, the PS discipline is preferable, because it allows small tasks to avoid being delayed behind large tasks in a queue. However, we show that system performance can actually be significantly better under FCFS queueing if each task is assigned to a host based on the task's size. By task assignment, we mean an algorithm that inspects incoming tasks and assigns them to hosts for service. The particular task assignment policy we propose is called SITA-E: Size Interval Task Assignment with Equal Load. Surprisingly, under SITA-E, FCFS queueing typically outperforms the PS discipline by a factor of about two, as measured by mean waiting time and mean slowdown (the waiting time of a task divided by its service time). We compare the FCFS/SITA-E policy to the processor-sharing case analytically; in addition, we compare it to a number of other policies in simulation. We show that the benefits of SITA-E are present even in small-scale distributed systems (four or more hosts). Furthermore, SITA-E is a static policy that does not incorporate feedback knowledge of the state of the hosts, which allows for a simple and scalable implementation.
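
The core of SITA-E is choosing size-interval boundaries so each host receives an equal share of the expected load. The sketch below computes such cutoffs numerically for a bounded Pareto task-size distribution, a common heavy-tailed model in this literature; the parameter values and function names are illustrative, not taken from the paper.

    # SITA-E sketch: pick boundaries x_0 < x_1 < ... < x_h so that the
    # load integral of x*f(x) is equal on every interval, then route each
    # incoming task to the host owning its size interval.
    import numpy as np
    from scipy import integrate, optimize

    def sita_e_cutoffs(f, lo, hi, hosts):
        """Size-interval boundaries equalizing expected load per host."""
        load = lambda a, b: integrate.quad(lambda x: x * f(x), a, b)[0]
        total = load(lo, hi)
        cuts = [lo]
        for i in range(1, hosts):
            target = total * i / hosts
            x = optimize.brentq(lambda c: load(lo, c) - target, cuts[-1], hi)
            cuts.append(x)
        return cuts + [hi]

    def assign(size, cuts):
        """Host index owning the size interval containing `size`."""
        i = int(np.searchsorted(cuts, size, side="right")) - 1
        return min(max(i, 0), len(cuts) - 2)

    # Bounded Pareto density on [k, p] with tail index alpha.
    alpha, k, p = 1.1, 1.0, 1e6
    norm = alpha * k**alpha / (1 - (k / p)**alpha)
    f = lambda x: norm * x**(-alpha - 1)

    cuts = sita_e_cutoffs(f, k, p, hosts=4)
    host = assign(37.5, cuts)

Because the cutoffs depend only on the size distribution, the policy is static, which is why it needs no feedback about host state.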

Relevance: 40.00%

Abstract:

We present an online distributed algorithm, the Causation Logging Algorithm (CLA), in which Autonomous Systems (ASes) in the Internet individually report route oscillations/flaps they experience to a central Internet Routing Registry (IRR). The IRR aggregates these reports and may observe what we call causation chains where each node on the chain caused a route flap at the next node along the chain. A chain may also have a causation cycle. The type of an observed causation chain/cycle allows the IRR to infer the underlying policy routing configuration (i.e., the system of economic relationships and constraints on route/path preferences). Our algorithm is based on a formal policy routing model that captures the propagation dynamics of route flaps under arbitrary changes in topology or path preferences. We derive invariant properties of causation chains/cycles for ASes which conform to economic relationships based on the popular Gao-Rexford model. The Gao-Rexford model is known to be safe in the sense that the system always converges to a stable set of paths under static conditions. Our CLA algorithm recovers the type/property of an observed causation chain of an underlying system and determines whether it conforms to the safe economic Gao-Rexford model. Causes for nonconformity can be diagnosed by comparing the properties of the causation chains with those predicted from different variants of the Gao-Rexford model.
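
As a rough illustration of the aggregation step at the IRR, the sketch below chains per-AS flap reports of the form "AS `cause` triggered a flap at AS `victim` at time t" into causation chains, flagging a repeated AS as a causation cycle. The report format and function names are invented for illustration; the paper's formal routing model is considerably richer.

    # Chain time-ordered flap reports (time, cause_as, victim_as) into
    # causation chains; a chain that revisits an AS closes a cycle.
    from collections import defaultdict

    def build_chains(reports):
        """reports: iterable of (time, cause_as, victim_as) tuples."""
        successors = defaultdict(list)
        for t, cause, victim in sorted(reports):
            successors[cause].append((t, victim))

        def extend(chain, t_prev):
            last = chain[-1]
            nexts = [(t, v) for t, v in successors[last] if t > t_prev]
            if not nexts:
                return [chain]
            out = []
            for t, v in nexts:
                if v in chain:                     # causation cycle detected
                    out.append(chain + [v])
                else:
                    out.extend(extend(chain + [v], t))
            return out

        # Start chains at ASes that appear only as causes, never as victims.
        roots = set(successors) - {v for lst in successors.values()
                                   for _, v in lst}
        chains = []
        for r in roots or successors:
            chains.extend(extend([r], float("-inf")))
        return chains

    # Example: AS 1 flapped AS 2, which in turn flapped AS 3.
    print(build_chains([(0, 1, 2), (1, 2, 3)]))    # [[1, 2, 3]]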