854 results for hijacking the event
Abstract:
The availability of a network strongly depends on the frequency of service outages and the recovery time for each outage. The loss of network resources includes complete or partial failure of hardware and software components, power outages, scheduled maintenance of software and hardware, operational errors such as configuration errors, and acts of nature such as floods, tornadoes and earthquakes. This paper proposes a practical approach to the enhancement of Quality of Service (QoS) routing by means of providing alternative or repair paths in the event of a breakage of a working path. The proposed scheme guarantees that every Protected Node (PN) is connected to a multi-repair path such that no further failure or breakage of single or double repair paths can cause any simultaneous loss of connectivity between an ingress node and an egress node. Links to be protected in an MPLS network are predefined and a Label Switched Path (LSP) request involves the establishment of a working path. The use of multi-protection paths permits the formation of numerous protection paths, allowing greater flexibility. Our analysis will examine several methods, including single, double and multi-repair routes and the prioritization of signals along the protected paths, to improve QoS and throughput and to reduce protection path placement cost, delay, congestion and collisions.
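As a hedged illustration of the repair-path idea in the abstract above, the sketch below computes a working path on a small hypothetical topology and then a repair path that avoids the working path's links. The node names, topology and BFS routine are assumptions for illustration only, not the paper's algorithm.

```python
# Minimal sketch (not the paper's scheme): find a repair path that avoids the
# links of the working path, on a hypothetical five-node topology.
from collections import deque

def shortest_path(graph, src, dst, banned_links=frozenset()):
    """Breadth-first search for a shortest path that avoids banned (u, v) links."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return list(reversed(path))
        for v in graph[u]:
            if v not in prev and (u, v) not in banned_links and (v, u) not in banned_links:
                prev[v] = u
                queue.append(v)
    return None  # no repair path exists

# Hypothetical topology with ingress A and egress E.
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D", "E"],
    "D": ["B", "C", "E"],
    "E": ["C", "D"],
}
working = shortest_path(graph, "A", "E")                    # e.g. A-C-E
protected = set(zip(working, working[1:]))                  # links of the working path
repair = shortest_path(graph, "A", "E", banned_links=protected)
print("working path:", working)
print("repair path: ", repair)                              # avoids every protected link
```

A multi-repair scheme as described in the abstract would repeat this idea so that the failure of one or two repair paths still leaves ingress and egress connected.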
Abstract:
How can a bridge be built between autonomic computing approaches and parallel computing systems? The work reported in this paper is motivated towards bridging this gap by proposing swarm-array computing, a novel technique to achieve autonomy for distributed parallel computing systems. Among the three proposed approaches, the second, namely 'Intelligent Agents', is the focus of this paper. The task to be executed on parallel computing cores is considered as a swarm of autonomous agents. A task is carried to a computing core by carrier agents and can be seamlessly transferred between cores in the event of a predicted failure, thereby achieving the self-ware objectives of autonomic computing. The feasibility of the proposed approach is validated on a multi-agent simulator.
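The carrier-agent behaviour described above can be pictured with a minimal sketch. The Core and CarrierAgent classes, the health values and the failure-prediction threshold below are hypothetical illustrations of task migration on a predicted failure, not the authors' multi-agent simulator.

```python
# Illustrative sketch only: a carrier agent moves its task to another core
# when failure of the current core is predicted.
class Core:
    """A computing core with a simple health indicator (1.0 = fully healthy)."""
    def __init__(self, cid):
        self.cid = cid
        self.health = 1.0

    def failure_predicted(self, threshold=0.3):
        return self.health < threshold


class CarrierAgent:
    """Carries one task and migrates it away from a core whose failure is predicted."""
    def __init__(self, task, core):
        self.task = task
        self.core = core

    def step(self, cores):
        if self.core.failure_predicted():
            healthy = [c for c in cores if not c.failure_predicted()]
            if healthy:
                target = max(healthy, key=lambda c: c.health)
                print(f"task '{self.task}' migrates: core {self.core.cid} -> core {target.cid}")
                self.core = target


cores = [Core(i) for i in range(4)]
agent = CarrierAgent("sub-task-0", cores[0])
for _ in range(5):
    cores[0].health -= 0.25      # core 0 gradually degrades towards failure
    agent.step(cores)            # the agent moves the task before the failure occurs
```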
Abstract:
The work reported in this paper proposes 'Intelligent Agents', a swarm-array computing approach focused on applying autonomic computing concepts to parallel computing systems and building reliable systems for space applications. Swarm-array computing is a novel computing approach inspired by swarm robotics and considered a path to achieving autonomy in parallel computing systems. In the intelligent agent approach, a task to be executed on parallel computing cores is considered as a swarm of autonomous agents. A task is carried to a computing core by carrier agents and can be seamlessly transferred between cores in the event of a predicted failure, thereby achieving the self-* objectives of autonomic computing. The approach is validated on a multi-agent simulator.
Abstract:
The work reported in this paper is motivated by the need to apply autonomic computing concepts to parallel computing systems. Advancing on prior work based on intelligent cores [36], a swarm-array computing approach, this paper focuses on ‘Intelligent Agents’, another swarm-array computing approach in which the task to be executed on a parallel computing core is considered as a swarm of autonomous agents. A task is carried to a computing core by carrier agents and is seamlessly transferred between cores in the event of a predicted failure, thereby achieving the self-ware objectives of autonomic computing. The feasibility of the proposed swarm-array computing approach is validated on a multi-agent simulator.
Abstract:
The state of health and safety on construction sites in Ghana was investigated using first-hand observation of fourteen (14) construction project sites in 2009 and 2010. At each site, the construction project, the workers and the physical environment of the site were inspected and evaluated against health and safety indicators taken from the literature. The results reveal a poor state of health and safety on Ghanaian construction sites. The primary reasons are the lack of a strong institutional framework for governing construction activities and poor enforcement of health and safety policies and procedures. Also, Ghanaian society does not place a high premium on the health and safety of construction workers on site. Interviews with workers indicated that injuries and accidents are common on sites. However, compensation for injury is often at the discretion of the contractor, although collective bargaining agreements between labour unions and employers prescribe obligations for the contractor in the event of injury to a worker.
Abstract:
The work reported in this paper is motivated towards handling single node failures for parallel summation algorithms in computer clusters. An agent-based approach is proposed in which a task to be executed is decomposed into sub-tasks and mapped onto agents that traverse computing nodes. The agents intercommunicate across computing nodes to share information in the event of a predicted node failure. Two single node failure scenarios are considered. The Message Passing Interface is employed for implementing the proposed approach. Quantitative results obtained from experiments reveal that the agent-based approach can handle failures more efficiently than traditional failure handling approaches.
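A simplified, single-process sketch of the decomposition idea is given below. The paper itself uses the Message Passing Interface across real nodes, so the lists standing in for nodes, the chunking scheme and the simulated failure are assumptions for illustration only.

```python
# Sketch of the agent-based idea for a parallel summation: decompose into
# sub-tasks, one per agent/node, and hand a sub-task to a neighbouring node
# when its node's failure is predicted.
data = list(range(1, 101))                  # the full summation; the true sum is 5050
num_nodes = 4
chunks = [data[i::num_nodes] for i in range(num_nodes)]   # one sub-task per agent/node
node_alive = [True] * num_nodes

# A failure of node 2 is predicted: its agent hands the sub-task to node 3 beforehand.
node_alive[2] = False
chunks[3].extend(chunks[2])
chunks[2] = []

partials = [sum(chunk) for node, chunk in enumerate(chunks) if node_alive[node]]
print(sum(partials))                        # 5050 despite the simulated node failure
```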
Abstract:
How can a bridge be built between autonomic computing approaches and parallel computing systems? The work reported in this paper is motivated towards bridging this gap by proposing a swarm-array computing approach based on ‘Intelligent Agents’ to achieve autonomy for distributed parallel computing systems. In the proposed approach, a task to be executed on parallel computing cores is carried onto a computing core by carrier agents that can seamlessly transfer between processing cores in the event of a predicted failure. The cognitive capabilities of the carrier agents on a parallel processing core serve to achieve the self-ware objectives of autonomic computing, hence applying autonomic computing concepts for the benefit of parallel computing systems. The feasibility of the proposed approach is validated by simulation studies using a multi-agent simulator on an FPGA (Field-Programmable Gate Array) and experimental studies using MPI (Message Passing Interface) on a computer cluster. Preliminary results confirm that applying autonomic computing principles to parallel computing systems is beneficial.
Abstract:
Recent research in multi-agent systems incorporates fault tolerance concepts, but does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm-array computing approach, namely 'Intelligent Agents'. A task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to successfully complete the task. The feasibility of the approach is validated by implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
Abstract:
Recent research in multi-agent systems incorporates fault tolerance concepts, but does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm-array computing approach, namely 'Intelligent Agents'. A task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to successfully complete the task. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator, and by implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
Abstract:
Recent research in multi-agent systems incorporates fault tolerance concepts. However, the research does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm-array computing approach, namely ‘Intelligent Agents’. In the approach considered, a task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to successfully complete the task. The agents hence contribute towards fault tolerance and towards building reliable systems. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator and by implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
Abstract:
High-resolution satellite radar observations of erupting volcanoes can yield valuable information on rapidly changing deposits and geomorphology. Using the TerraSAR-X (TSX) radar with a spatial resolution of about 2 m and a repeat interval of 11 days, we show how a variety of techniques were used to record some of the eruptive history of the Soufriere Hills Volcano, Montserrat between July 2008 and February 2010. After a 15-month pause in lava dome growth, a vulcanian explosion occurred on 28 July 2008 whose vent was hidden by dense cloud. Using TSX change difference images, we were able to show the civil authorities that this explosion had not disrupted the dome sufficiently to warrant continued evacuation. Change difference images also proved to be valuable in mapping new pyroclastic flow deposits: the valley-occupying block-and-ash component tending to increase backscatter and the marginal surge deposits reducing it, with the pattern reversing after the event. By comparing east- and west-looking images acquired 12 hours apart, the deposition of some individual pyroclastic flows can be inferred from change differences. Some of the narrow upper sections of valleys draining the volcano received many tens of metres of rockfall and pyroclastic flow deposits over periods of a few weeks. By measuring the changing shadows cast by these valleys in TSX images, the changing depth of infill by deposits could be estimated. In addition to using the amplitude data from the radar images, we also used their phase information within the InSAR technique to calculate the topography during a period of no surface activity. This enabled areas of transient topography, crucial for directing future flows, to be captured.
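One common way to form a change difference image from two repeat-pass amplitude images is a log-ratio, sketched below with synthetic data. The array sizes, gamma-distributed amplitudes and 2 dB threshold are assumptions for illustration; the exact TSX processing chain is not specified in the abstract.

```python
# Hedged illustration: a log-ratio change difference between two repeat-pass
# SAR amplitude images, highlighting backscatter increases (block-and-ash
# deposits) and decreases (marginal surge deposits).
import numpy as np

rng = np.random.default_rng(0)
before = rng.gamma(shape=4.0, scale=0.05, size=(256, 256))   # synthetic pre-event amplitudes
after = before.copy()
after[100:140, 60:200] *= 2.0    # brighter valley-filling block-and-ash deposit
after[90:100, 60:200] *= 0.5     # darker marginal surge deposit

change_db = 10.0 * np.log10(after / before)   # change difference in dB
new_deposits = change_db > 2.0                # crude map of significant brightening
print(f"pixels brightened by more than 2 dB: {int(new_deposits.sum())}")
```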
Abstract:
This study presents the findings of applying a Discrete Demand Side Control (DDSC) approach to the space heating of two case study buildings. High and low tolerance scenarios are implemented on the space heating controller to assess the impact of DDSC upon buildings with different thermal capacitances (light-weight and heavy-weight construction). Space heating is provided by an electric heat pump powered from a wind turbine, with a back-up electrical network connection in the event of insufficient wind being available when a demand occurs. Findings highlight that thermal comfort is maintained within an acceptable range while the DDSC controller maintains the demand/supply balance. Whilst energy demand increases slightly, this is mostly supplied from the wind turbine and is therefore of little significance; a reduction in operating costs and carbon emissions is still attained.
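A minimal sketch of a discrete demand side control decision consistent with the description above is given below. The function name, tolerance band and power figures are hypothetical, not the study's controller.

```python
# Minimal sketch (illustrative only): if the room needs heat but wind output is
# insufficient, heating is deferred while temperature remains inside the comfort
# tolerance band; only below the band is the grid back-up used.
def ddsc_decision(indoor_temp, setpoint, tolerance, wind_kw, demand_kw):
    if indoor_temp >= setpoint:
        return "off"
    if wind_kw >= demand_kw:
        return "heat_from_wind"
    if indoor_temp > setpoint - tolerance:
        return "defer"                     # ride on the building's thermal capacitance
    return "heat_from_grid_backup"         # comfort limit reached, use back-up supply

# Example: a high-tolerance scenario (2.0 C band) on a heavy-weight building.
print(ddsc_decision(indoor_temp=19.2, setpoint=21.0, tolerance=2.0,
                    wind_kw=0.5, demand_kw=3.0))    # -> "defer"
```

A heavier-weight building can tolerate longer deferrals before reaching the comfort limit, which is why the study compares constructions with different thermal capacitances.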
Abstract:
This paper proposes a practical approach to the enhancement of Quality of Service (QoS) routing by means of providing alternative or repair paths in the event of a breakage of a working path. The proposed scheme guarantees that every Protected Node (PN) is connected to a multi-repair path such that no further failure or breakage of single or double repair paths can cause any simultaneous loss of connectivity between an ingress node and an egress node. Links to be protected in an MPLS network are predefined and a Label Switched Path (LSP) request involves the establishment of a working path. The use of multi-protection paths permits the formation of numerous protection paths, allowing greater flexibility. Our analysis examined several methods, including single, double and multi-repair routes and the prioritization of signals along the protected paths, to improve QoS and throughput and to reduce protection path placement cost, delay, congestion and collisions. Results obtained indicated that creating multi-repair paths and prioritizing packets reduces delay and increases throughput: the delays at the ingress/egress LSPs were low compared to when the signals had not been classified. Therefore, the proposed scheme provides a means to improve QoS in path restoration in MPLS using available network resources. Prioritizing the packets in the data plane revealed that the amount of traffic transmitted using medium- and low-priority LSPs does not have any impact on the explicit rate of the high-priority LSP, thereby eliminating the knock-on effect.
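The packet prioritization result can be illustrated with a strict-priority scheduler over three LSP queues, sketched below. The queue structure and packet labels are assumptions, not the simulation set-up used in the paper, but they show why medium- and low-priority load cannot delay the high-priority LSP.

```python
# Illustrative strict-priority scheduler over three LSP classes: high-priority
# traffic is always served first, so medium/low load cannot delay it.
from collections import deque

queues = {"high": deque(), "medium": deque(), "low": deque()}

def enqueue(priority, packet):
    queues[priority].append(packet)

def dequeue():
    for priority in ("high", "medium", "low"):   # strict priority order
        if queues[priority]:
            return priority, queues[priority].popleft()
    return None

for i in range(3):
    enqueue("low", f"L{i}")
    enqueue("medium", f"M{i}")
enqueue("high", "H0")
print(dequeue())   # ('high', 'H0') is served first regardless of the other load
```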
Abstract:
Prêt-à-Médiatiser by House of POLLYFIBRE is a film made from diverse footage of a performance. The live event takes the fashion show catwalk as a site for exploration, with a focus on the dialogue between liveness and mediatisation. The audience and press are invited to document the event and the subsequent film is made using footage collated from the crew, the audience and the official press. RAPID PULSE International Performance Festival presents international, national, and local artists who were invited or selected from an international call for proposals. The curatorial committee consisted of Julie Laffin, Steven Bridges, Giana Gambino, and Joseph Ravens. The festival includes live gallery performances, public performances, a video series, social events, artist talks and panel discussions.