861 results for Multi-agent simulation and artificial snow optimization


Relevance:

100.00%

Publisher:

Abstract:

This paper focuses on improving computer network management through the adoption of artificial intelligence techniques. A logical inference system has been devised to enable automated isolation, diagnosis, and even repair of network problems, thus enhancing the reliability, performance, and security of networks. We propose a distributed multi-agent architecture for network management, in which a logical reasoner acts as an external managing entity capable of directing, coordinating, and stimulating actions in an active management architecture. Active networks technology forms the lower layer, making it possible to deploy code that implements teleo-reactive agents distributed across the whole network. We adopt the Situation Calculus to define a network model and the Reactive Golog language to implement the logical reasoner. The reasoner uses an active network management architecture to inject and execute operational tasks in the network. The integrated system combines the advantages of logical reasoning and network programmability, providing a powerful system capable of performing high-level management tasks to deal with network faults.
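As a minimal illustration of the teleo-reactive control idea, the Python sketch below evaluates an ordered list of condition-action rules against an observed network state and fires the first rule whose condition holds; the state fields and actions are hypothetical and merely stand in for the paper's Reactive Golog programs.

# Illustrative teleo-reactive agent for network fault handling (hypothetical
# conditions/actions; not the paper's Reactive Golog code).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    condition: Callable[[dict], bool]   # predicate over the observed network state
    action: Callable[[dict], None]      # operational task to inject into the network

def step(rules: List[Rule], state: dict) -> None:
    """Fire the first rule whose condition holds (teleo-reactive semantics)."""
    for rule in rules:
        if rule.condition(state):
            rule.action(state)
            return

# Example rule set: repair a failed link, otherwise re-route, otherwise do nothing.
rules = [
    Rule(lambda s: s.get("link_down"), lambda s: s.update(action="restart_interface")),
    Rule(lambda s: s.get("high_loss"), lambda s: s.update(action="reroute_traffic")),
    Rule(lambda s: True,               lambda s: None),  # default: no intervention
]

step(rules, {"link_down": True})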

Relevance:

100.00%

Publisher:

Abstract:

This article presents a prototype model based on a wireless sensor actuator network (WSAN) aimed at optimizing both energy consumption of environmental systems and well-being of occupants in buildings. The model is a system consisting of the following components: a wireless sensor network, 'sense diaries', environmental systems such as heating, ventilation and air-conditioning systems, and a central computer. A multi-agent system (MAS) is used to derive and act on the preferences of the occupants. Each occupant is represented by a personal agent in the MAS. The sense diary is a new device designed to elicit feedback from occupants about their satisfaction with the environment. The roles of the components are: the WSAN collects data about physical parameters such as temperature and humidity from an indoor environment; the central computer processes the collected data; the sense diaries leverage trade-offs between energy consumption and well-being, in conjunction with the agent system; and the environmental systems control the indoor environment.
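A minimal sketch of the kind of comfort-versus-energy trade-off a personal agent might compute is given below; the comfort model, energy proxy, and weighting are assumptions made for illustration and are not taken from the paper.

# Hypothetical personal-agent trade-off between occupant comfort and energy use;
# the comfort model, energy proxy and weighting are illustrative only.
def comfort(setpoint_c: float, preferred_c: float) -> float:
    """Simple comfort score: 1 at the preferred temperature, falling off quadratically."""
    return max(0.0, 1.0 - (setpoint_c - preferred_c) ** 2 / 9.0)

def energy_cost(setpoint_c: float, outdoor_c: float) -> float:
    """Crude proxy: energy grows with the gap between setpoint and outdoor temperature."""
    return abs(setpoint_c - outdoor_c) / 20.0

def propose_setpoint(preferred_c: float, outdoor_c: float, weight: float = 0.5) -> float:
    """Pick the setpoint that maximises weighted comfort minus energy cost."""
    candidates = [18 + 0.5 * i for i in range(17)]  # 18..26 degC in 0.5 degC steps
    return max(candidates, key=lambda t: weight * comfort(t, preferred_c)
                                         - (1 - weight) * energy_cost(t, outdoor_c))

print(propose_setpoint(preferred_c=22.0, outdoor_c=5.0))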

Relevance:

100.00%

Publisher:

Abstract:

How can a bridge be built between autonomic computing approaches and parallel computing systems? How can autonomic computing approaches be extended towards building reliable systems? How can existing technologies be merged to provide a solution for self-managing systems? The work reported in this paper aims to answer these questions by proposing Swarm-Array Computing, a novel technique inspired by swarm robotics and built on the foundations of the autonomic and parallel computing paradigms. Two approaches, based on intelligent cores and intelligent agents, are proposed to achieve autonomy in parallel computing systems. The feasibility of the proposed approaches is validated on a multi-agent simulator.

Relevance:

100.00%

Publisher:

Abstract:

In this work a hybrid technique that combines probabilistic and optimization-based methods is presented. The method is applied, both in simulation and by means of real-time experiments, to the heating unit of a Heating, Ventilation and Air Conditioning (HVAC) system. It is shown that the addition of the probabilistic approach improves the fault diagnosis accuracy.
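To make the combination concrete, here is a minimal Python sketch, under an assumed one-parameter coil model and invented numbers, of how an optimisation step (a least-squares fit of a heating-coil gain) can feed a probabilistic step (a Bayesian update over fault hypotheses). It illustrates the general hybrid idea only; it is not the method or the model used in the paper.

# Illustrative hybrid diagnosis for a heating coil: a least-squares fit of a gain
# parameter (optimisation step) feeds a Bayesian update over fault hypotheses
# (probabilistic step). Model and numbers are assumptions, not the paper's.
import numpy as np
from scipy.optimize import minimize_scalar

measured_rise = np.array([4.8, 5.1, 4.9, 5.0])    # observed air temperature rise (degC)
valve_opening = np.array([0.5, 0.55, 0.52, 0.53]) # commanded valve positions

def residual(gain: float) -> float:
    """Squared error between predicted and measured temperature rise."""
    return float(np.sum((gain * valve_opening * 10.0 - measured_rise) ** 2))

gain_hat = minimize_scalar(residual, bounds=(0.1, 2.0), method="bounded").x

# Bayesian update: a healthy coil has gain near 1.0, a fouled coil near 0.7.
hypotheses = {"healthy": 1.0, "fouled_coil": 0.7}
prior = {"healthy": 0.9, "fouled_coil": 0.1}
likelihood = {h: np.exp(-10 * (gain_hat - g) ** 2) for h, g in hypotheses.items()}
evidence = sum(prior[h] * likelihood[h] for h in hypotheses)
posterior = {h: prior[h] * likelihood[h] / evidence for h in hypotheses}
print(gain_hat, posterior)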

Relevance:

100.00%

Publisher:

Abstract:

Space applications demand reliable systems. Autonomic computing defines such reliable systems as self-managing systems. The work reported in this paper combines agent-based and swarm-robotics approaches, leading to swarm-array computing, a novel technique for achieving autonomy in distributed parallel computing systems. Two swarm-array computing approaches, based on swarms of computational resources and swarms of tasks, are explored. An FPGA is considered as the computing system. The feasibility of the two proposed approaches, which bind the computing system and the task together, is evaluated in simulation on the SeSAm multi-agent simulator.

Relevance:

100.00%

Publisher:

Abstract:

One of the enablers for new consumer-electronics-based products to be accepted into the market is the availability of inexpensive, flexible and multi-standard chipsets and services. DVB-T, the principal standard for terrestrial broadcast of digital video in Europe, has been extremely successful, leading governments to reconsider their targets for analogue television broadcast switch-off. As a further small step towards increasingly cost-effective chipsets, the OFDM deterministic equalizer has previously been presented together with its application to DVB-T. This paper discusses the test set-up of a DVB-T compliant baseband simulation that includes the deterministic equalizer and DVB-T standard propagation channels. This is followed by a presentation of the inner and outer Bit Error Rate (BER) results obtained with various modulation levels, coding rates and propagation channels, in order to ascertain the actual performance of the deterministic equalizer.
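The BER measurement loop described above can be sketched as follows; the simulate_link helper is a placeholder for a real DVB-T baseband chain with the deterministic equalizer, and the channel error rates are invented for illustration.

# Skeleton of a BER sweep over modulation levels, coding rates and channel models,
# counting bit errors before (inner) and after (outer) the outer decoder.
# simulate_link is a placeholder that ignores the actual modulation and coding.
import itertools, random

def simulate_link(modulation: str, code_rate: str, channel: str, n_bits: int = 10_000):
    """Placeholder transmit/receive chain returning (inner_errors, outer_errors)."""
    tx = [random.randint(0, 1) for _ in range(n_bits)]
    flip = {"AWGN": 0.01, "Ricean": 0.02, "Rayleigh": 0.05}[channel]
    rx_inner = [b ^ (random.random() < flip) for b in tx]          # before outer decoder
    rx_outer = [b ^ (random.random() < flip * 0.1) for b in tx]    # after outer decoder
    return sum(a != b for a, b in zip(tx, rx_inner)), sum(a != b for a, b in zip(tx, rx_outer))

for mod, rate, chan in itertools.product(["QPSK", "16QAM", "64QAM"],
                                         ["1/2", "2/3", "3/4"],
                                         ["AWGN", "Ricean", "Rayleigh"]):
    inner, outer = simulate_link(mod, rate, chan)
    print(f"{mod} {rate} {chan}: inner BER={inner/10_000:.4f}, outer BER={outer/10_000:.4f}")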

Relevance:

100.00%

Publisher:

Abstract:

The work reported in this paper proposes 'Intelligent Agents', a swarm-array computing approach that applies autonomic computing concepts to parallel computing systems in order to build reliable systems for space applications. Swarm-array computing is a novel computing approach inspired by swarm robotics and considered a path to achieving autonomy in parallel computing systems. In the intelligent agent approach, a task to be executed on parallel computing cores is considered as a swarm of autonomous agents. A task is carried to a computing core by carrier agents and can be seamlessly transferred between cores in the event of a predicted failure, thereby achieving the self-* objectives of autonomic computing. The approach is validated on a multi-agent simulator.
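A toy Python model of the carrier-agent idea, with hypothetical core names and an assumed external failure predictor, is sketched below; it shows only the migrate-on-predicted-failure behaviour, not the simulator used in the paper.

# Toy model of a carrier agent: it holds a sub-task, and when a failure is
# predicted for its current core it migrates the task to a healthy core.
# Core names and the prediction input are hypothetical.
class CarrierAgent:
    def __init__(self, task, core):
        self.task = task
        self.core = core

    def maybe_migrate(self, predicted_to_fail, healthy_cores):
        """Move to another core if the current one is predicted to fail."""
        if self.core in predicted_to_fail and healthy_cores:
            self.core = healthy_cores[0]
        return self.core

agent = CarrierAgent(task="matrix_block_7", core="core_3")
print(agent.maybe_migrate(predicted_to_fail={"core_3"}, healthy_cores=["core_5", "core_6"]))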

Relevance:

100.00%

Publisher:

Abstract:

How can a bridge be built between autonomic computing approaches and parallel computing systems? The work reported in this paper is motivated towards bridging this gap by proposing a swarm-array computing approach based on 'Intelligent Agents' to achieve autonomy for distributed parallel computing systems. In the proposed approach, a task to be executed on parallel computing cores is carried onto a computing core by carrier agents that can seamlessly transfer between processing cores in the event of a predicted failure. The cognitive capabilities of the carrier agents on a parallel processing core serve to achieve the self-ware objectives of autonomic computing, hence applying autonomic computing concepts for the benefit of parallel computing systems. The feasibility of the proposed approach is validated by simulation studies using a multi-agent simulator on an FPGA (Field-Programmable Gate Array) and by experimental studies using MPI (Message Passing Interface) on a computer cluster. Preliminary results confirm that applying autonomic computing principles to parallel computing systems is beneficial.
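The MPI side of such a validation could look roughly like the following mpi4py sketch, in which a rank that predicts its own failure hands its task state to a spare rank; this is an assumed, simplified stand-in for the authors' experimental code.

# Minimal mpi4py sketch of handing a task from a rank that predicts failure to a
# spare rank; run with e.g. `mpiexec -n 2 python this_file.py`. Purely illustrative.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    failure_predicted = True               # stand-in for a real health monitor
    if failure_predicted:
        comm.send({"task": "partial_sum", "state": 42}, dest=1, tag=0)
elif rank == 1:
    task = comm.recv(source=0, tag=0)      # the spare core resumes the task
    print("rank 1 resumed", task)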

Relevance:

100.00%

Publisher:

Abstract:

Recent research in multi-agent systems incorporates fault tolerance concepts but does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm-array computing approach, namely 'Intelligent Agents'. A task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to complete the task successfully. The feasibility of the approach is validated by implementing a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
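For reference, a plain parallel reduction of the kind used as the test case can be written in a few lines with mpi4py (this is the bare MPI kernel, without the agent layer described in the paper):

# Minimal parallel reduction with mpi4py: each rank sums its own slice of data,
# then the partial sums are combined at rank 0.
# Run with: mpiexec -n 4 python reduce_example.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local_values = list(range(rank * 100, (rank + 1) * 100))  # each rank owns a slice
local_sum = sum(local_values)
total = comm.reduce(local_sum, op=MPI.SUM, root=0)         # combine partial sums

if rank == 0:
    print("global sum:", total)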

Relevance:

100.00%

Publisher:

Abstract:

Where users are interacting in a distributed virtual environment, the actions of each user must be observed by peers with sufficient consistency and within a limited delay so as not to be detrimental to the interaction. The consistency control issue may be split into three parts: update control; consistent enactment and evolution of events; and causal consistency. The delay in the presentation of events, termed latency, is primarily dependent on the network propagation delay and the consistency control algorithms. The latency induced by the consistency control algorithm, in particular causal ordering, is proportional to the number of participants. This paper describes how the effect of network delays may be reduced and introduces a scalable solution that provides sufficient consistency control while minimising its effect on latency. The principles described have been developed at Reading over the past five years. Similar principles are now emerging in the simulation community through the HLA standard. This paper attempts to validate the suggested principles within the schema of distributed simulation and virtual environments and to compare and contrast them with those described by the HLA definition documents.
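For context, the classic causal-ordering machinery whose overhead grows with the number of participants can be sketched with vector clocks as below; this is textbook background, not the scalable solution the paper introduces.

# Vector-clock causal ordering: a message is deliverable only if it is the next
# expected event from its sender and its sender had not seen anything we have not.
# Textbook machinery; its per-message state grows with the number of participants.
def can_deliver(msg_clock, sender, local_clock):
    for p, t in msg_clock.items():
        if p == sender:
            if t != local_clock.get(p, 0) + 1:
                return False        # not the next event from this sender
        elif t > local_clock.get(p, 0):
            return False            # causally depends on something not yet seen
    return True

local = {"A": 1, "B": 0, "C": 2}
incoming = {"A": 2, "B": 0, "C": 1}     # next event from A, consistent with local state
print(can_deliver(incoming, sender="A", local_clock=local))   # True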

Relevance:

100.00%

Publisher:

Abstract:

Business and IT alignment is increasingly acknowledged as a key to organisational performance. However, alignment research lacks mechanisms that enable an on-going process with multi-level effects. Multi-level learning allows on-going effectiveness through development of the organisation and improved quality of business and IT strategies. In particular, exploration and exploitation enable an effective process of alignment across dynamic, multiple levels of learning. Hence, this paper proposes a conceptual framework that links multi-level learning and business-IT strategy through the concepts of exploration and exploitation, considering short-term and long-term alignment together to address the challenges of strategic alignment in sustaining organisational performance.

Relevance:

100.00%

Publisher:

Abstract:

The cold equatorial SST bias in the tropical Pacific that persists in many coupled AOGCMs severely impacts the fidelity of the simulated climate and variability in this key region, such as the ENSO phenomenon. The classical bias analysis in these models usually concentrates on the multi-decadal to centennial time series needed to obtain statistically robust features. Yet this strategy cannot fully explain how the models' errors were generated in the first place. Here, we use seasonal re-forecasts (hindcasts) to track back the origin of this cold bias. As such hindcasts are initialized close to observations, the transient drift leading to the cold bias can be analyzed to distinguish pre-existing errors from errors responding to initial ones. A time sequence of the processes involved in the advent of the final mean-state errors can then be proposed. We apply this strategy to the ENSEMBLES-FP6 project multi-model hindcasts of the last decades. Four of the five AOGCMs develop a persistent equatorial cold tongue bias within a few months. The associated systematic errors are first assessed separately for the warm and cold ENSO phases. We find that the models are able to reproduce either El Niño or La Niña close to observations, but not both. ENSO composites then show that the spurious equatorial cooling is strongest for El Niño years for the February and August start dates. For these events and at this time of the year, zonal wind errors in the equatorial Pacific are present from the beginning of the simulation and are hypothesized to be at the origin of the equatorial cold bias, generating excessively strong upwelling conditions. The systematic underestimation of the mixed layer depth in several models can also amplify the growth of the SST bias. The seminal role of these zonal wind errors is further demonstrated by carrying out ocean-only experiments forced by the AOGCMs' daily 10-meter wind. In a case study, we show that for several models this forcing is sufficient to reproduce the main SST error patterns seen after one month in the AOGCM hindcasts.
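The hindcast drift analysis amounts to averaging forecast-minus-observation errors as a function of lead time, optionally composited over ENSO phases; the following sketch uses synthetic arrays purely to show the shape of the computation, not the ENSEMBLES data.

# Illustrative lead-time-dependent SST drift: bias(lead) = mean over start dates
# of (forecast - observation). Array shapes and values are synthetic.
import numpy as np

n_starts, n_leads = 20, 7                       # start dates x forecast months
rng = np.random.default_rng(0)
obs = 27.0 + rng.normal(0, 0.3, size=(n_starts, n_leads))        # observed SST (degC)
fcst = obs - 0.1 * np.arange(n_leads) + rng.normal(0, 0.2, size=(n_starts, n_leads))

drift = (fcst - obs).mean(axis=0)               # mean error as a function of lead time
nino_years = np.arange(n_starts) % 4 == 0       # stand-in for an El Niño composite mask
drift_nino = (fcst[nino_years] - obs[nino_years]).mean(axis=0)
print(drift, drift_nino)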

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we will address the endeavors of three disciplines, Psychology, Neuroscience, and Artificial Neural Network (ANN) modeling, in explaining how the mind perceives and attends to information. More precisely, we will shed some light on the efforts to understand the allocation of attentional resources to the processing of emotional stimuli. This review aims to inform the three disciplines about converging points of their research and to provide a starting point for discussion.

Relevance:

100.00%

Publisher:

Abstract:

Garfield produces a critique of neo-minimalist art practice by demonstrating how the artist Melanie Jackson’s Some things you are not allowed to send around the world (2003 and 2006) and the experimental film-maker Vivienne Dick’s Liberty’s booty (1980) – neither of which can be said to be about feeling ‘at home’ in the world, be it as a resident or as a nomad – examine global humanity through multi-positionality, excess and contingency, and thereby begin to articulate a new cosmopolitan relationship with the local – or, rather, with many different localities – in one and the same maximalist sweep of the work. ‘Maximalism’ in Garfield’s coinage signifies an excessive overloading (through editing, collage, and the sheer density of the range of the material) that enables the viewer to insert themselves into the narrative of the work. In the art of both Jackson and Dick, Garfield detects a refusal to know or to judge the world; instead, there is an attempt to incorporate the complexities of its full range into the singular vision of the work, challenging the viewer to identify what is at stake.

Relevance:

100.00%

Publisher:

Abstract:

A new model has been developed for assessing multiple sources of nitrogen in catchments. The model (INCA) is process based and uses reaction kinetic equations to simulate the principal mechanisms operating. The model allows for plant uptake, surface and sub-surface pathways, and can simulate up to six land uses simultaneously. The model can be applied to a catchment as a semi-distributed simulation and has an inbuilt multi-reach structure for river systems. Sources of nitrogen can be atmospheric deposition, the terrestrial environment (e.g. agriculture, leakage from forest systems, etc.), urban areas, or direct discharges via sewage or intensive farm units. The model runs on a daily time step and can provide information as time series at key sites, as profiles down river systems, or as statistical distributions. The process model is described here; in a companion paper it is applied to the River Tywi catchment in South Wales and the Great Ouse in Bedfordshire.
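The reaction-kinetic, daily-time-step character of such a model can be illustrated with a first-order mass balance for a nitrate store; the rate constants and inputs below are invented for the example and are not INCA parameter values.

# Illustrative daily mass balance for nitrate in a soil store: inputs from
# deposition and fertiliser, first-order losses to plant uptake, denitrification
# and leaching. Parameters are made up; they are not INCA values.
def daily_step(n_store, deposition, fertiliser, k_uptake=0.02, k_denit=0.005, k_leach=0.03):
    """Advance the nitrate store (kg N/ha) by one day; return (new store, leached flux)."""
    inputs = deposition + fertiliser
    losses = (k_uptake + k_denit + k_leach) * n_store
    leached = k_leach * n_store                 # flux routed to the river reach
    return n_store + inputs - losses, leached

store, series = 50.0, []
for day in range(30):
    store, leached = daily_step(store, deposition=0.05, fertiliser=0.5 if day == 0 else 0.0)
    series.append(leached)
print(store, sum(series))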