24 results for wired best-effort networks


Relevance: 30.00%

Abstract:

In this paper, we investigate the camera network placement problem for target coverage in manufacturing workplaces. The problem is formulated as finding the minimum number of cameras of different types, and their best configurations, to maximise the coverage of the monitored workplace such that each point in a given set of targets of interest is k-covered with a predefined minimum spatial resolution. Since the problem is NP-complete, and even NP-hard to approximate, a novel method based on Simulated Annealing is presented to solve the optimisation problem. A new neighbourhood generation function is proposed to handle the discrete nature of the problem. The visual coverage is modelled using realistic and coherent assumptions about camera intrinsic and extrinsic parameters, making it suitable for many real-world camera-based applications. A task-specific quality-of-coverage measure is proposed to assist in selecting the best among a set of camera network placements with equal coverage. A 3D CAD model of the monitored space is used to examine physical occlusions of target points. The results show the accuracy, efficiency, and scalability of the presented solution method, which can be applied effectively in the design of practical camera networks.
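The abstract contains no code; purely as an illustration of the approach it describes, here is a minimal simulated-annealing sketch in Python for a 2D toy version of the placement problem, with a discrete neighbourhood move that re-places one camera on another grid cell and heading. The names, the simplified coverage model (fixed field of view and range, k=1 by default, no occlusion testing), and the geometric cooling schedule are assumptions made here, not the paper's method.

```python
import math
import random

def covered(cam, target, fov=60.0, max_range=5.0):
    # A camera is an (x, y, heading_deg) tuple; a target is (x, y).
    dx, dy = target[0] - cam[0], target[1] - cam[1]
    if math.hypot(dx, dy) > max_range:
        return False
    angle = math.degrees(math.atan2(dy, dx))
    # Smallest signed angular difference between heading and target bearing.
    diff = abs((angle - cam[2] + 180) % 360 - 180)
    return diff <= fov / 2

def coverage(cams, targets, k=1):
    # Fraction of targets seen by at least k cameras.
    return sum(
        1 for t in targets if sum(covered(c, t) for c in cams) >= k
    ) / len(targets)

def neighbour(cams, cells, headings, rng):
    # Discrete move: re-place one camera on another grid cell/heading.
    cams = list(cams)
    i = rng.randrange(len(cams))
    x, y = rng.choice(cells)
    cams[i] = (x, y, rng.choice(headings))
    return cams

def anneal(targets, n_cams, cells, headings, k=1, steps=2000, seed=0):
    rng = random.Random(seed)
    cur = [(*rng.choice(cells), rng.choice(headings)) for _ in range(n_cams)]
    cur_cov = coverage(cur, targets, k)
    best, best_cov = cur, cur_cov
    temp = 1.0
    for _ in range(steps):
        cand = neighbour(cur, cells, headings, rng)
        cand_cov = coverage(cand, targets, k)
        # Accept improvements always; worse moves with Boltzmann probability.
        if cand_cov >= cur_cov or rng.random() < math.exp((cand_cov - cur_cov) / temp):
            cur, cur_cov = cand, cand_cov
        if cur_cov > best_cov:
            best, best_cov = cur, cur_cov
        temp *= 0.998  # geometric cooling
    return best, best_cov
```

The discrete neighbourhood move mirrors the abstract's point that standard continuous-perturbation neighbourhoods do not fit the problem's discrete candidate positions and orientations.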

Relevance: 30.00%

Abstract:

Maximum target coverage with minimum number of sensor nodes, known as the MCMS problem, is an important problem in directional sensor networks (DSNs). For guaranteed coverage and event reporting, the underlying mechanism must ensure that all targets are covered by the sensors and that the resulting network is connected. Existing solutions allow individual sensor nodes to determine their sensing directions for maximum target coverage, which produces redundant sensing coverage and considerable overhead. Gathering nodes into clusters may provide a better solution to this problem. In this paper, we design distributed clustering and target coverage algorithms that address the problem in an energy-efficient way. To the best of our knowledge, this is the first work that exploits cluster heads to determine the active sensing nodes and their directions for solving target coverage problems in DSNs. Our extensive simulation study shows that our system outperforms a number of state-of-the-art approaches.
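As a hedged illustration of how a cluster head might choose active sensing nodes and directions, the Python sketch below runs a greedy, set-cover-style selection over (sensor, direction) pairs. The data layout and function name are assumptions for illustration, not the paper's algorithm.

```python
def choose_directions(sensors, targets_in_dir):
    """Cluster-head-style greedy selection: repeatedly activate the
    (sensor, direction) pair covering the most still-uncovered targets.
    targets_in_dir[s][d] is the set of targets sensor s sees facing d.
    Returns {sensor: direction} for the activated sensing nodes."""
    uncovered = set().union(*(ts for dirs in targets_in_dir.values()
                              for ts in dirs.values()))
    active = {}
    while uncovered:
        best = max(
            ((s, d) for s in sensors if s not in active
             for d in targets_in_dir[s]),
            key=lambda sd: len(targets_in_dir[sd[0]][sd[1]] & uncovered),
            default=None,
        )
        if best is None or not targets_in_dir[best[0]][best[1]] & uncovered:
            break  # remaining targets cannot be covered by idle sensors
        active[best[0]] = best[1]
        uncovered -= targets_in_dir[best[0]][best[1]]
    return active
```

Because one coordinator makes all the choices for its cluster, no two neighbouring sensors redundantly point at the same target, which is the overhead the abstract attributes to purely individual direction selection.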

Relevance: 30.00%

Abstract:

Security of Wireless Sensor Networks (WSNs) is a key issue in information security. Most existing security protocols exploit various mathematical tools to strengthen their security. Some protocols use details of the geographical location of the nodes. However, to the best of the authors' knowledge, none of the existing works exploit the constraints faced by the adversary, specifically the difficulty of tracing a particular frequency within a large range of unknown frequency channels. The current work uses the positional details of the individual nodes and aims to exploit this adversarial weakness by assigning a wide range of frequency channels to each node. Experiments using magneto-optic sensors reveal that any change in the parametric Faraday rotation angle affects the frequency of the optical waves. This idea can perhaps be generalised to practically deployable sensors (having corresponding parameters), along with a suitable key management scheme.
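The paper's mechanism is physical (magneto-optic control of the transmit frequency); purely as an illustrative stand-in for the underlying idea, the sketch below shows how a node sharing a secret key with its receiver could hop pseudo-randomly over a wide channel range, so the legitimate receiver can follow the sequence while an eavesdropper must sweep all channels. The function name and the SHA-256-based derivation are assumptions, not the paper's construction.

```python
import hashlib

def channel_for_slot(node_key: bytes, slot: int, n_channels: int) -> int:
    """Derive the transmit channel for a time slot from a per-node
    secret key. Any party holding node_key computes the same channel;
    an adversary without it faces a 1/n_channels guess per slot."""
    digest = hashlib.sha256(node_key + slot.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") % n_channels
```

The wider the assigned channel range, the lower the per-slot probability that an adversary tracing one frequency observes the node at all, which is the constraint the abstract proposes to exploit.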

Relevance: 30.00%

Abstract:

Data aggregation in wireless sensor networks is employed to reduce communication overhead and prolong network lifetime. However, an adversary may compromise some sensor nodes and use them to forge false values as the aggregation result. Previous secure data aggregation schemes have tackled this problem from different angles; their goal is to ensure that the Base Station (BS) does not accept any forged aggregation results. But none of them has tried to detect the nodes that inject bogus aggregation results into the network. Moreover, most of them incur a communication overhead that is (at best) logarithmic per node. In this paper, we propose a secure and energy-efficient data aggregation scheme that can detect malicious nodes with a constant per-node communication overhead. In our solution, all aggregation results are signed with the private keys of the aggregators so that they cannot be altered by others. Nodes on each link additionally use their pairwise shared keys for secure communication. Each node receives the aggregation results from its parent (sent by the parent of its parent) and from its siblings (via its parent node), and verifies the aggregation result of the parent node. Theoretical analysis of energy consumption and communication overhead accords with our comparison-based simulation study over random data aggregation trees.
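A minimal sketch of the verification idea described above, using HMAC tags as a stand-in for the scheme's private-key signatures (an assumption made here for brevity): a node checks the parent's tag over the reported aggregate, then recomputes the aggregate from the sibling values it saw and flags a mismatch.

```python
import hashlib
import hmac

def mac(key: bytes, value: int) -> bytes:
    # Stand-in for the per-node signature in the scheme's description.
    return hmac.new(key, str(value).encode(), hashlib.sha256).digest()

def aggregate(children):
    # children: list of (value, tag, key) tuples reported by child nodes;
    # sum is used as the illustrative aggregation function.
    return sum(v for v, _, _ in children)

def verify_parent(parent_value, parent_tag, parent_key, children):
    """A node checks (1) the parent's tag over its reported aggregate
    and (2) that the aggregate equals the sum of the sibling values the
    node saw. False identifies the parent as misbehaving."""
    if not hmac.compare_digest(parent_tag, mac(parent_key, parent_value)):
        return False
    return parent_value == aggregate(children)
```

Because each node verifies only its own parent against a constant number of sibling reports, the check costs O(1) messages per node, matching the constant-overhead claim in the abstract.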

Relevance: 30.00%

Abstract:

Randomly scattered sensors may leave sensing holes and redundant sensors. In carrier-based sensor relocation, mobile robots (with limited capacity to carry sensors) pick up spare or redundant sensors and relocate them to sensing holes. In the only known localized algorithm, robots randomly traverse the field and act on identified pairs of spare sensors and coverage holes. We propose a Market-based Sensor Relocation (MSR) algorithm, which optimizes sensor deployment locations and introduces bidding and coordination among neighboring robots. The sensors along the boundary of each hole elect one of themselves as a representative, which bids to neighboring robots for hole-filling service. Each robot explores randomly by applying a Least Recently Visited policy; it chooses the best bid according to the cost-over-progress ratio and fetches a nearby spare sensor to cover the corresponding sensing hole. Robots within communication range share their tasks to search for better solutions. Simulation shows that MSR outperforms the existing competing algorithm G-R3S2 significantly on total robot path length, energy, and time to cover the holes, and slightly on the number of sensors needed to cover the holes and the number of sensor messages used for bidding and sharing deployment locations.
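The bid-selection step can be sketched as follows in Python; the exact cost and progress definitions used here (detour length through the spare sensor, over the straight-line distance the move closes) are illustrative assumptions, not MSR's precise formulation.

```python
import math

def best_bid(robot_pos, spare_pos, bids):
    """Pick the hole bid minimising cost / progress, where cost is the
    detour robot -> spare sensor -> hole and progress is the straight
    distance to the hole. Positions are (x, y) tuples; bids is a list
    of hole-representative positions."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def ratio(hole):
        cost = dist(robot_pos, spare_pos) + dist(spare_pos, hole)
        progress = dist(robot_pos, hole)
        return cost / progress if progress > 0 else math.inf

    return min(bids, key=ratio)
```

A cost-over-progress rule of this kind favours bids whose service detour is short relative to the coverage gain, which is how localized greedy routing heuristics commonly trade energy against progress.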

Relevance: 30.00%

Abstract:

Despite significant advancements in wireless sensor networks (WSNs), energy conservation remains one of the most important research challenges. Recently, this problem has been addressed by using a mobile sink as an effective technique for improving the efficiency of energy consumption in the network. In this paper, the energy conservation problem is first formulated to maximize the lifetime of the WSN subject to delay and node energy constraints. Then, to solve the defined problem, a data collection scheduling scheme with a mobile sink is proposed. In the proposed approach, the sink's movement is governed by a type-2 fuzzy controller so that it is located at the best place and time to collect sensory data. We conducted extensive experiments to study the effectiveness of the proposed protocol and compared it against the streaming data delivery (SDD) and virtual circle combined straight routing (VCCS) protocols. We observed that the proposed protocol outperforms both SDD and VCCS by reducing energy consumption, minimizing delays, and enhancing data collection quality.
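The paper's controller is type-2 fuzzy; as a crude crisp stand-in that conveys the same intent, the sketch below scores each candidate sink stop by the buffered data of the nodes it would serve within one collection range, weighted against their residual energy. All names, parameters, and the scoring rule are illustrative assumptions, not the paper's controller.

```python
import math

def next_sink_stop(candidates, nodes, max_hop=3.0):
    """Crisp stand-in for a fuzzy sink controller. nodes is a list of
    ((x, y), residual_energy, buffered_data) tuples; the chosen stop is
    the candidate serving the most data from the most energy-poor
    nodes within max_hop."""
    def score(stop):
        s = 0.0
        for pos, energy, buffered in nodes:
            d = math.hypot(pos[0] - stop[0], pos[1] - stop[1])
            if d <= max_hop:
                # Prefer collecting lots of buffered data from nodes
                # that can least afford long-haul relaying.
                s += buffered / (energy + 1e-9)
        return s

    return max(candidates, key=score)
```

Moving the sink toward energy-poor, data-rich regions shortens the multi-hop paths that drain those nodes fastest, which is the energy-conservation rationale the abstract gives for sink mobility.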

Relevance: 30.00%

Abstract:

G20 outreach processes, in the form of the Think 20, Labour 20, Business 20, Civil 20, Youth 20, and Women 20, are a formal attempt by G20 leaders to engage various social sectors with G20 policymaking. This essay contends that G20 outreach processes are best understood as transnational policy networks, which are involved in widening the field of policy communication and deliberation. The importance of these transnational policy networks rests upon their role in developing and disseminating G20 policy priorities and principles; they are also an attempt to enhance the legitimacy and influence of the G20 and its policy proposals.

"We agree that, in order to strengthen its ability to build and sustain the political consensus needed to respond to challenges, the G20 must remain efficient, transparent and accountable. To achieve this, we decide to … pursue consistent and effective engagement with non-members, regional and international organisations, including the United Nations, and other actors, and we welcome their contribution to our work as appropriate. We also encourage engagement with civil society." (G20 Cannes Summit Final Declaration 2011; G20 2011)

The difficulty in balancing the effectiveness and representativeness of the Group of Twenty (G20) has led to sustained questions about its legitimacy (Cooper 2010; Rudd 2011; Cooper and Pouliot 2015). Consequently, while leaders have long sought external advice about the agendas of Group of Seven (G7) summits since 1975, and about the G20 finance ministers and central bank governors’ meetings (G20 FM/CBG) since 1999, there has been intensification, elaboration, and institutionalization of transnational networks of policymakers with respect to the G20 in recent years. These networks are especially evident in the form of the G20 working groups and G20 outreach processes involved in the G20 FM/CBG and the G20 leaders’ forum created in 2008.

G20 working groups include transgovernmental groups of government officials and outside experts within a specific policy area who are charged with preparing material for G20 deliberations. G20 outreach processes are a more recent and more formal attempt by G20 leaders to engage various social sectors with the policymaking activity of the G20; they were first considered by the G20 membership in 2010, beginning with a more formal engagement with business interests. This led to the formal development of G20 outreach groups in 2013 in the form of the Think 20 (think tanks), Labour 20, Business 20, Civil 20 and Youth 20, which include representatives from these sectors. In 2015, a Women 20 outreach group was also added. These outreach processes are best understood as transnational policy networks which have been built to support the G20's capacity to be effective and legitimate.

This essay focuses on G20 outreach processes and examines why and how the G20 has sought to augment its intergovernmental summitry and transgovernmental working groups with transnational policy networks, purposely involving a range of societal interests. Transnational policy networks demonstrate the existence of policymaking practices which include the policy influence of experts and advocates outside government. These networks also indicate the ways in which governments, International Governmental Organizations (IGOs) and summits like the G20 engage society, or in which elements of society engage themselves with the policymaking process (Stone 2008). These networks intersect with the intergovernmental activities of leaders and key diplomats, and overlap with the transgovernmental relationships of various levels of government bureaucrats (Baker 2009). One of the principal features of transnational policy networks is the way they create and channel the communication of political ideas and priorities. However, it is important to keep in mind the purpose and power of the actors involved in a network, and to consider who has the discretion and motivation to create the network in the first instance. As the G20 members stated in 2012, the aspiration for outreach is founded upon an intent to strengthen the G20's capacity "to build and sustain the political consensus". Consequently, it is important to consider how the development of transnational policy networks in the form of G20 outreach processes is able to sustain the effectiveness and legitimacy of the G20.

This essay contends that G20 outreach processes are best understood as transnational policy networks. These networks have been built to widen the field of policy communication and deliberation. Furthermore, these outreach processes and networks are an attempt to enhance the legitimacy and influence of the G20 and its policy proposals. While there is no doubt that outreach practices are "ad hoc responses to the widespread charge that the G20 reproduces the politics of exclusion in global governance" (Cooper and Pouliot 2015, 347), these practices have the potential to improve both the effectiveness and legitimacy of the G20. The G20 possesses uncertain legitimacy, and its members demonstrate an awareness of this, along with a corresponding willingness to actively develop various political practices to support the capacity and legitimacy of the G20.

However, G20 outreach also enables the G20 to place some limit upon the policy narratives and ideas that develop within these policy networks. The G20 is liable to be misunderstood if the activity of these transnational networks is not examined, because it is fundamentally a deliberative policy forum rather than a forum for negotiating binding regulations. Transnational policy networks have the potential to scrutinize and amplify relevant policy ideas, and thereby to enhance the legitimacy of the G20 and strengthen its capacity to address an array of global economic and social problems. However, while some narrative control is important to amplify the G20 agenda, too much narrative control will undermine the G20's legitimacy and its capacity to develop broad-based responses to global problems. This essay explores the formation of these transnational policy networks by first outlining the evolution of the purpose and configuration of the G20; it then considers the ways in which G20 outreach processes constitute transnational policy networks and why they have been established; and lastly, it analyses how these networks operate to enhance the legitimacy and effectiveness of the G20.

Relevance: 30.00%

Abstract:

Sensor networks are a branch of distributed ad hoc networks with a broad range of applications in surveillance and environment monitoring. In these networks, message exchanges are carried out in a multi-hop manner. Due to resource constraints, security professionals often use lightweight protocols, which do not provide adequate security. Even in the absence of constraints, designing a foolproof set of protocols and codes is almost impossible. This leaves the door open to worms that take advantage of the vulnerabilities to propagate by exploiting the multi-hop message exchange mechanism. This issue has recently drawn the attention of security researchers. In this paper, we investigate the propagation pattern of information in wireless sensor networks based on an extended theory of epidemiology. We develop a geographical susceptible-infective model for this purpose and analytically derive the dynamics of information propagation. Compared with previous models, ours is more realistic and is distinguished by two key factors that had been neglected before: 1) the proposed model does not rely purely on epidemic theory but binds it to the geometric and spatial constraints of real-world sensor networks; and 2) it extends to also model the spread dynamics of conflicting information (e.g., a worm and its patch). We perform extensive simulations to show the accuracy of our model and compare it with previous ones. The findings challenge the common intuition that the infection source is always the best location from which to start patching; we show that this depends on many factors, including the time it takes for the patch to be developed, worm/patch characteristics, and the shape of the network.
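A geographical susceptible-infective model of the kind described can be sketched as a discrete-time simulation over a geometric graph, where only nodes within radio range are neighbours. The parameters and layout below are illustrative, and this sketch deliberately omits the paper's analytical derivation and its worm-versus-patch extension.

```python
import math
import random

def si_spread(positions, radius, source, beta, steps, seed=0):
    """Discrete-time susceptible-infective spread over a geometric
    graph: nodes within `radius` of each other are neighbours, and each
    infective node infects each susceptible neighbour with probability
    beta per step. Returns the set of infected node indices."""
    rng = random.Random(seed)
    n = len(positions)
    # Spatial constraint: infection can only cross radio-range links.
    neigh = [[j for j in range(n) if j != i and
              math.dist(positions[i], positions[j]) <= radius]
             for i in range(n)]
    infected = {source}
    for _ in range(steps):
        new = set()
        for i in infected:
            for j in neigh[i]:
                if j not in infected and rng.random() < beta:
                    new.add(j)
        infected |= new
    return infected
```

Binding the epidemic process to the geometric graph is what separates this style of model from classical well-mixed epidemiology: a worm can only reach nodes that are radio-reachable, so network shape directly constrains the propagation front.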

Relevance: 30.00%

Abstract:

In current data centers, an application (e.g., MapReduce, Dryad, a search platform, etc.) usually generates a group of parallel flows to complete a job. These flows compose a coflow, and only completing them all is meaningful to the application. Accordingly, minimizing the average Coflow Completion Time (CCT) becomes a critical objective of flow scheduling. However, achieving this goal in today's Data Center Networks (DCNs) is quite challenging, not only because the scheduling problem is theoretically NP-hard, but also because it is tough to perform practical flow scheduling in large-scale DCNs. In this paper, we find that minimizing the average CCT of a set of coflows is equivalent to the well-known problem of minimizing the sum of completion times in a concurrent open shop. As there are abundant existing solutions for concurrent open shop, this opens up a variety of techniques for coflow scheduling. Inspired by the best known result, we derive a 2-approximation algorithm for coflow scheduling and further develop a decentralized coflow scheduling system, D-CAS, which avoids the system problems associated with current centralized proposals while addressing the performance challenges of decentralized suggestions. Trace-driven simulations indicate that D-CAS achieves performance close to Varys, the state-of-the-art centralized method, and significantly outperforms Baraat, the only existing decentralized method.
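The concurrent-open-shop view can be illustrated with a toy permutation schedule in Python: each coflow places load on a set of ports, coflows are served one at a time in some order, and a coflow completes when its slowest port finishes its cumulative work. The shortest-bottleneck-first ordering below is a simple heuristic in the spirit of such permutation schedules, not the paper's 2-approximation algorithm.

```python
def schedule_coflows(coflows):
    """Concurrent-open-shop toy model: each coflow is a {port: load}
    dict. Coflows are served in shortest-bottleneck-first order; the
    CCT of a coflow is the max, over its ports, of the cumulative load
    processed on that port up to and including it.
    Returns (service_order, per_coflow_CCTs)."""
    order = sorted(coflows, key=lambda c: max(c.values()))
    busy = {}   # cumulative load already assigned to each port
    ccts = []
    for c in order:
        for port, load in c.items():
            busy[port] = busy.get(port, 0) + load
        ccts.append(max(busy[p] for p in c))
    return order, ccts
```

Serving the small-bottleneck coflow first lets it finish early without materially delaying the large one, lowering the average CCT relative to the reverse order; this is the intuition behind permutation-based results for minimizing total completion time in a concurrent open shop.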