818 results for distributed simulation pads anonymity tor simulator anonymous cloud computing
Abstract:
The radiation budget simulated by the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-year reanalysis (ERA40) is evaluated for the period 1979–2001 using independent satellite data and additional model data. This provides information on the quality of the radiation products and indirect evaluation of other aspects of the climate produced by ERA40. The climatology of clear-sky outgoing longwave radiation (OLR) is well captured by ERA40. Underestimations of about 10 W m−2 in clear-sky OLR over tropical convective regions by ERA40 compared to satellite data are substantially reduced when the satellite sampling is taken into account. The climatology of column-integrated water vapor is well simulated by ERA40 compared to satellite data over the ocean, indicating that the simulation of downward clear-sky longwave fluxes at the surface is likely to be good. Clear-sky absorbed solar radiation (ASR) and clear-sky OLR are overestimated by ERA40 over north Africa and high-latitude land regions. The observed interannual changes in low-latitude means are not well reproduced. Using ERA40 to analyze trends and climate feedbacks globally is therefore not recommended. The all-sky radiation budget is poorly simulated by ERA40. OLR is overestimated by around 10 W m−2 over much of the globe. ASR is underestimated by around 30 W m−2 over tropical ocean regions. Away from marine stratocumulus regions, where cloud fraction is underestimated by ERA40, the poor radiation simulation by ERA40 appears to be related to inaccurate radiative properties of cloud rather than inaccurate cloud distributions.
Abstract:
Wireless local area networks (WLANs) have changed the way many of us communicate, work, play and live. Due to their popularity, dense deployments are becoming the norm in many cities around the world. However, increased interference and traffic demands can severely limit the aggregate throughput achievable if an effective channel assignment scheme is not used. In this paper, we propose an enhanced asynchronous distributed and dynamic channel assignment scheme that is simple to implement, does not require any knowledge of the throughput function, allows asynchronous channel switching by each access point (AP) and is superior in performance. Simulation results show that our proposed scheme converges much faster than previously reported synchronous schemes, with reductions in convergence time and channel switches of up to 73.8% and 30.0%, respectively.
Abstract:
The popularity of wireless local area networks (WLANs) has resulted in their dense deployment in many cities around the world. The increased interference among different WLANs severely degrades the achievable throughput. This problem has been further exacerbated by the limited number of frequency channels available. An improved distributed and dynamic channel assignment scheme that is simple to implement and does not depend on knowledge of the throughput function is proposed in this work. It also allows each access point (AP) to asynchronously switch to the new best channel. Simulation results show that our proposed scheme converges much faster than similar previously reported work, with reductions in convergence time and channel switches of as much as 77.3% and 52.3%, respectively. When it is employed in dynamic environments, the throughput improves by up to 12.7%.
Abstract:
How can a bridge be built between autonomic computing approaches and parallel computing systems? The work reported in this paper is motivated by the need to bridge this gap, and proposes a swarm-array computing approach based on ‘Intelligent Agents’ to achieve autonomy for distributed parallel computing systems. In the proposed approach, a task to be executed on parallel computing cores is carried onto a computing core by carrier agents that can seamlessly transfer between processing cores in the event of a predicted failure. The cognitive capabilities of the carrier agents on a parallel processing core serve to achieve the self-ware objectives of autonomic computing, thereby applying autonomic computing concepts for the benefit of parallel computing systems. The feasibility of the proposed approach is validated by simulation studies using a multi-agent simulator on an FPGA (Field-Programmable Gate Array) and experimental studies using MPI (Message Passing Interface) on a computer cluster. Preliminary results confirm that applying autonomic computing principles to parallel computing systems is beneficial.
Abstract:
Recent research in multi-agent systems incorporates fault tolerance concepts, but does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm-array computing approach, namely 'Intelligent Agents'. A task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to successfully complete the task. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator, and by the implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
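The parallel reduction mentioned above can be illustrated with a minimal mpi4py sketch. This shows only the generic MPI reduction building block, not the agent-based decomposition or fault-prediction machinery described in the abstract; the per-process work below is a placeholder.

```python
# Minimal sketch of a parallel reduction with MPI (via mpi4py). Generic
# illustration only: the paper's agent layer and fault handling are not
# modelled here, and the local computation is a stand-in.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each process computes a partial result for its share of the work
# (here: a trivial partial sum standing in for a real sub-task).
local_value = sum(range(rank * 1000, (rank + 1) * 1000))

# The reduction combines the partial results on the root process.
total = comm.reduce(local_value, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Reduced result across {size} processes: {total}")
```

Run, for example, with `mpiexec -n 4 python reduce_sketch.py`; each rank contributes its partial sum and only rank 0 receives the combined value.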
Abstract:
Dense deployments of wireless local area networks (WLANs) are becoming the norm in many cities around the world. However, increased interference and traffic demands can severely limit the aggregate throughput achievable unless an effective channel assignment scheme is used. In this work, a simple and effective distributed channel assignment (DCA) scheme is proposed. It is shown that, in order to maximise throughput, each access point (AP) simply chooses the channel with the minimum number of active neighbour nodes (i.e. nodes associated with neighbouring APs that have packets to send). However, applying such a scheme in practice depends critically on its ability to estimate the number of neighbour nodes in each channel, for which no practical estimator has been proposed before. In view of this, an extended Kalman filter (EKF) estimator and an estimate of the number of nodes per AP are proposed. These not only provide fast and accurate estimates but can also exploit the channel switching information of neighbouring APs. Extensive packet-level simulation results show that the proposed minimum neighbour and EKF estimator (MINEK) scheme is highly scalable and can provide significant throughput improvement over other channel assignment schemes.
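The selection rule described above, choosing the channel with the fewest active neighbour nodes, can be sketched as follows. This is a hypothetical illustration: the `NeighbourEstimator` below is a simple scalar Kalman-style filter standing in for the paper's EKF, and the class names, noise parameters and scan loop are assumptions, not the MINEK implementation.

```python
# Hypothetical sketch of the "minimum neighbour" channel-selection rule, with a
# scalar Kalman-style estimator standing in for the paper's EKF.
import random

CHANNELS = [1, 6, 11]   # non-overlapping 2.4 GHz channels (assumption)

class NeighbourEstimator:
    """Tracks an estimate of the number of active neighbour nodes on one channel."""
    def __init__(self, initial=0.0, variance=10.0,
                 process_noise=1.0, measurement_noise=4.0):
        self.x = initial      # estimated active-neighbour count
        self.p = variance     # estimate variance
        self.q = process_noise
        self.r = measurement_noise

    def update(self, observed_count):
        # Predict: neighbour count assumed roughly constant between scans.
        self.p += self.q
        # Correct with the latest (noisy) observation of busy neighbour nodes.
        k = self.p / (self.p + self.r)
        self.x += k * (observed_count - self.x)
        self.p *= (1.0 - k)
        return self.x

def choose_channel(estimators):
    """Pick the channel whose estimated active-neighbour count is smallest."""
    return min(estimators, key=lambda ch: estimators[ch].x)

# Usage: one estimator per channel at an AP, fed by periodic channel scans.
estimators = {ch: NeighbourEstimator() for ch in CHANNELS}
for _ in range(20):                                   # simulated scan rounds
    for ch in CHANNELS:
        estimators[ch].update(random.randint(0, 8))   # stand-in for a real scan
print("AP switches to channel", choose_channel(estimators))
```

In practice the observation would come from monitoring traffic of nodes associated with neighbouring APs on each channel, rather than from the random stand-in used here.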
Abstract:
Current mathematical models in building research have, in most studies, been limited to linear dynamic systems. A literature review of past studies investigating chaos theory approaches in building simulation models suggests that a chaos-based model is valid and can handle the increasing complexity of building systems, which have dynamic interactions among all the distributed and hierarchical systems on the one hand, and the environment and occupants on the other. The review also identifies the paucity of literature and the need for a suitable methodology for linking chaos theory to mathematical models in building design and management studies. This study is broadly divided into two parts and presented in two companion papers. Part (I) reviews the current state of chaos theory models as a starting point for establishing theories that can be effectively applied to building simulation models. Part (II) develops conceptual frameworks that approach current model methodologies from the theoretical perspective provided by chaos theory, with a focus on the key concepts and their potential to help better understand the nonlinear dynamic nature of built environment systems. Case studies are also presented which demonstrate the potential usefulness of chaos theory driven models in a wide variety of leading areas of building research. This study distills the fundamental properties and the most relevant characteristics of chaos theory essential to building simulation scientists, initiates a dialogue and builds bridges between scientists and engineers, and stimulates future research on a wide range of issues concerning building environmental systems.
Abstract:
Current mathematical models in building research have, in most studies, been limited to linear dynamic systems. A literature review of past studies investigating chaos theory approaches in building simulation models suggests that a chaos-based model is valid and can handle the increasing complexity of building systems, which have dynamic interactions among all the distributed and hierarchical systems on the one hand, and the environment and occupants on the other. The review also identifies the paucity of literature and the need for a suitable methodology for linking chaos theory to mathematical models in building design and management studies. This study is broadly divided into two parts and presented in two companion papers. Part (I), published in the previous issue, reviews the current state of chaos theory models as a starting point for establishing theories that can be effectively applied to building simulation models. Part (II) develops conceptual frameworks that approach current model methodologies from the theoretical perspective provided by chaos theory, with a focus on the key concepts and their potential to help better understand the nonlinear dynamic nature of built environment systems. Case studies are also presented which demonstrate the potential usefulness of chaos theory driven models in a wide variety of leading areas of building research. This study distills the fundamental properties and the most relevant characteristics of chaos theory essential to (1) building simulation scientists and designers, (2) initiating a dialogue between scientists and engineers, and (3) stimulating future research on a wide range of issues involved in designing and managing building environmental systems.
Abstract:
Clusters of computers can be used together to provide a powerful computing resource. Large Monte Carlo simulations, such as those used to model particle growth, are computationally intensive and take considerable time to execute on conventional workstations. By spreading the work of the simulation across a cluster of computers, the elapsed execution time can be greatly reduced. A user thus obtains the apparent performance of a supercomputer by using the spare cycles of other workstations.
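As a hedged illustration of this work-spreading idea, the sketch below splits a Monte Carlo run into independent batches. A local process pool stands in for the workstation cluster, and a pi estimate stands in for the particle-growth model; the batch sizes, seeds and function names are illustrative assumptions.

```python
# Splitting a Monte Carlo run into independent batches that execute in
# parallel. A process pool on one machine stands in for a cluster of
# workstations; the results are combined after the parallel phase.
import random
from multiprocessing import Pool

def run_batch(args):
    seed, trials = args
    rng = random.Random(seed)   # independent random stream per worker
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(trials))

if __name__ == "__main__":
    workers, trials_per_worker = 8, 250_000
    with Pool(workers) as pool:
        hits = pool.map(run_batch, [(seed, trials_per_worker) for seed in range(workers)])
    total = workers * trials_per_worker
    print("pi estimate:", 4.0 * sum(hits) / total)   # batches combined at the end
```

The same pattern carries over to a real cluster by replacing the process pool with a job scheduler or MPI, since the batches share no state until the final combination step.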
Abstract:
The authors discuss an implementation of an object-oriented (OO) fault simulator and its use within an adaptive fault diagnostic system. The simulator models the flow of faults around a power network, reporting the switchgear indications and protection messages that would be expected in a real fault scenario. The simulator has been used to train an adaptive fault diagnostic system; results and implications are discussed.
Abstract:
Research to date has tended to concentrate on bandwidth considerations to increase scalability in distributed interactive simulation and virtual reality systems. This paper proposes that the major concern for latency in user interaction is the fundamental limit on communication rate imposed by the speed of light. Causal volumes and surfaces are introduced as a model of the limitations on causality caused by this fundamental delay. The concept of a virtual world critical speed is introduced, which can be determined from the causal surface. The implications of the critical speed are discussed, and relativistic dynamics are used to constrain object speed, in the same way that speeds are bounded in the real world.
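The notions of a critical speed and relativistic speed bounding can be made concrete with a small sketch. The expressions below are assumptions for demonstration only (a latency-derived cap and a tanh clamp); the paper's actual definitions of causal surfaces and the virtual-world critical speed are not reproduced here.

```python
# Illustrative only: a latency-derived speed cap and a relativistic-style
# clamp on object velocity. Both formulae are assumptions, not the paper's.
import math

def critical_speed(separation_m: float, one_way_latency_s: float) -> float:
    # Assumed cap: a state update must be able to cross the separation within
    # the network latency, so speed is bounded by separation / latency.
    return separation_m / one_way_latency_s

def clamp_relativistic(v: float, c_virtual: float) -> float:
    # Map an unbounded requested speed onto (0, c_virtual), mimicking how real
    # speeds are bounded by c; tanh gives the asymptotic behaviour.
    return c_virtual * math.tanh(v / c_virtual)

v_crit = critical_speed(separation_m=100.0, one_way_latency_s=0.05)  # 2000 m/s
print(clamp_relativistic(5000.0, v_crit))   # stays below the critical speed
```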
Abstract:
The development of large-scale virtual reality and simulation systems has been driven mostly by the DIS and HLA standards community. A number of issues are coming to light about the applicability of these standards, in their present state, to the support of general multi-user VR systems. This paper pinpoints four issues that must be readdressed before large-scale virtual reality systems become accessible to a larger commercial and public domain: a reduction in the effects of network delays; scalable causal event delivery; update control; and scalable reliable communication. Each of these issues is tackled through a common theme of combining wall-clock and causal time-related entity behaviour, knowledge of network delays and prediction of entity behaviour, which together overcome many of the effects of network delay.
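One standard way to realise "prediction of entity behaviour" for hiding network delay is dead reckoning, the extrapolation technique used in DIS itself. The sketch below is a generic first-order dead-reckoning scheme, not the specific mechanism of this paper; the names and the error threshold are illustrative assumptions.

```python
# Generic first-order dead reckoning: remote hosts extrapolate an entity's
# position from its last reported state, hiding some of the network delay.
# The threshold and update logic are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EntityState:
    x: float                # last reported position
    y: float
    vx: float               # last reported velocity
    vy: float
    timestamp: float        # wall-clock time of the report (seconds)

def predict(state: EntityState, now: float) -> tuple[float, float]:
    """Extrapolate the position to 'now' using the last reported velocity."""
    dt = now - state.timestamp
    return state.x + state.vx * dt, state.y + state.vy * dt

def needs_update(true_pos, state: EntityState, now: float, threshold=0.5) -> bool:
    """Sender-side check: issue a new state update only when the remote
    prediction would drift more than 'threshold' from the true position."""
    px, py = predict(state, now)
    return (true_pos[0] - px) ** 2 + (true_pos[1] - py) ** 2 > threshold ** 2
```

The sender runs the same predictor as the receivers and transmits a fresh EntityState only when `needs_update` reports that the shared prediction has drifted too far, trading update rate against positional error.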
Abstract:
User interaction within a virtual environment may take various forms: a teleconferencing application will require users to speak to each other (Geak, 1993) with computer-supported co-operative working; an engineer may wish to pass an object to another user for examination; in a battlefield simulation (McDonough, 1992), users might exchange fire. In all cases it is necessary for the actions of one user to be presented to the others sufficiently quickly to allow realistic interaction. In this paper we take a fresh look at the approach of virtual reality operating systems by tackling the underlying issues of creating real-time multi-user environments.