871 results for Ad hoc network


Relevance: 90.00%

Abstract:

Wireless Mesh Networks (WMNs) have emerged as a key technology for the next generation of wireless networking. Rather than being just another type of ad-hoc networking, WMNs diversify the capabilities of ad-hoc networks. Many kinds of protocols work over WMNs, such as IEEE 802.11a/b/g, 802.15 and 802.16. To achieve high throughput under varying conditions, these protocols have to adapt their transmission rate. Although rate adaptation is a significant component, only a few algorithms, such as Auto Rate Fallback (ARF) and Receiver Based Auto Rate (RBAR), have been published. In this paper we show that MAC behavior, packet loss and physical layer conditions all play an important role in maintaining good channel conditions. We also perform rate adaptation along with multiple packet transmission to improve throughput. By dynamically monitoring channel quality, transmitting multiple packets, and adjusting the packet transmission rates according to certain optimization criteria, improvements in performance can be obtained. The proposed method detects channel congestion by measuring the fluctuation of the noise-to-signal ratio via its standard deviation, and detects packet loss before channel performance diminishes. We show that the use of such techniques in a WMN can significantly improve performance. The effectiveness of the proposed method is demonstrated in an experimental wireless network testbed via packet-level simulation. Our simulation results show that we were able to improve throughput regardless of the channel condition.
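As a rough illustration of the congestion-detection idea described above, the sketch below (written for this summary; the window size, threshold and rate ladder are hypothetical, not the paper's parameters) flags congestion when the noise-to-signal ratio fluctuates strongly over a sliding window, and steps the transmission rate accordingly.

    import statistics

    WINDOW = 32        # hypothetical sliding-window size (samples)
    THRESHOLD = 0.15   # hypothetical std-dev threshold on the NSR
    RATES = [6, 12, 24, 36, 48, 54]  # Mbps, an illustrative rate ladder

    def congested(nsr_samples, window=WINDOW, threshold=THRESHOLD):
        """True if recent noise-to-signal samples fluctuate beyond the threshold."""
        recent = nsr_samples[-window:]
        if len(recent) < 2:
            return False
        return statistics.stdev(recent) > threshold

    def next_rate(current_rate, nsr_samples, loss_detected, rates=RATES):
        """Back off on congestion or loss, otherwise probe a higher rate."""
        i = rates.index(current_rate)
        if congested(nsr_samples) or loss_detected:
            return rates[max(i - 1, 0)]            # step down
        return rates[min(i + 1, len(rates) - 1)]   # step up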

Relevance: 90.00%

Abstract:

Wireless Mesh Networks (WMNs) have emerged as a key technology for the next generation of wireless networking. Rather than being just another type of ad-hoc networking, WMNs diversify the capabilities of ad-hoc networks. Protocols that work over WMNs include IEEE 802.11a/b/g, 802.15, 802.16 and LTE-Advanced. To achieve high throughput under varying conditions, these protocols have to adapt their transmission rate. In this paper, we propose a scheme that improves channel utilization by performing rate adaptation along with multiple packet transmission, based on packet loss and physical layer conditions. Dynamic monitoring, multiple packet transmission and adaptation to changes in channel quality, by adjusting the packet transmission rates according to certain optimization criteria, provided greater throughput. The key feature of the proposed method is the combination of two factors: 1) detection of intrinsic channel conditions by measuring the fluctuation of the noise-to-signal ratio via its standard deviation, and 2) detection of packet loss induced by congestion. We show that the use of such techniques in a WMN can significantly improve performance in terms of the packet sending rate. The effectiveness of the proposed method was demonstrated in a simulated wireless network testbed via packet-level simulation.

Relevance: 90.00%

Abstract:

In wireless ad hoc sensor networks, energy use is in many cases the most important constraint, since it corresponds directly to operational lifetime. Topology management schemes such as GAF put nodes that are redundant for routing to sleep in order to save energy. The radio range affects the number of neighbouring nodes that collaborate to forward data to a base station or sink. In this paper we study a simple linear network and derive the relationship between the optimal radio range and the traffic load. We find that, compared with the best case in which all nodes use equal radio ranges, half of the power can be saved if the radio range is adjusted appropriately.
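The trade-off behind this result can be made concrete with a standard first-order radio model. The sketch below assumes a per-hop, per-bit energy of 2*E_ELEC + E_AMP*r**ALPHA (an assumption for illustration; the paper's own model and constants may differ) and derives the energy-minimizing hop length for a linear relay chain.

    # First-order radio model (assumed): relaying one bit over a hop of length r
    # costs transmit + receive electronics plus the amplifier term.
    E_ELEC = 50e-9    # J/bit, electronics energy (hypothetical value)
    E_AMP  = 100e-12  # J/bit/m^ALPHA, amplifier energy (hypothetical value)
    ALPHA  = 2        # path-loss exponent

    def total_energy(D, r):
        """Energy to relay one bit across distance D using hops of length r."""
        return (D / r) * (2 * E_ELEC + E_AMP * r ** ALPHA)

    # Setting the derivative with respect to r to zero gives the optimal hop length:
    r_opt = (2 * E_ELEC / (E_AMP * (ALPHA - 1))) ** (1 / ALPHA)
    print(r_opt, total_energy(1000.0, r_opt), total_energy(1000.0, 2 * r_opt))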

Relevance: 90.00%

Abstract:

Medium access control (MAC) protocols have a large impact on the achievable system performance of wireless ad hoc networks. Because of the limitations of existing analytical models for ad hoc networks, many researchers have opted to study the impact of MAC protocols via discrete-event simulations. However, as network scenarios, traffic patterns and physical layer techniques may change significantly, simulation alone is not efficient for gaining insight into how MAC protocols affect system performance. In this paper, we analyze the performance of the IEEE 802.11 distributed coordination function (DCF) in a multihop network scenario. We are particularly interested in understanding how physical layer techniques may affect MAC protocol performance. For this purpose, the interference range is studied and incorporated into the analytical model. Simulations with OPNET show the effectiveness of the proposed analytical approach. Copyright 2005 ACM.
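One physical layer feature the abstract alludes to is the interference range. A commonly used model (assumed here for illustration, not necessarily the paper's exact formulation) relates it to the transmitter-receiver distance d via the capture threshold and the path-loss exponent:

    # Under a path-loss exponent K and a capture (SIR) threshold, a node
    # interferes with a reception over distance d whenever it is closer than
    #   R_i = SIR_THRESHOLD ** (1/K) * d
    SIR_THRESHOLD = 10.0   # 10 dB capture threshold, a typical assumption
    K = 4                  # two-ray ground path-loss exponent

    def interference_range(d, sir=SIR_THRESHOLD, k=K):
        return sir ** (1.0 / k) * d

    print(interference_range(100.0))  # ~177.8 m for a 100 m link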

Relevance: 90.00%

Abstract:

Wireless Mesh Networks (WMNs) have emerged as a key technology for the next generation of wireless networking. Rather than being just another type of ad-hoc networking, WMNs diversify the capabilities of ad-hoc networks. Protocols that work over WMNs include IEEE 802.11a/b/g, 802.15, 802.16 and LTE-Advanced. To achieve high throughput under varying conditions, these protocols have to adapt their transmission rate. This paper proposes a scheme that improves channel utilization by performing rate adaptation along with multiple packet transmission, based on packet loss and physical layer conditions. Dynamic monitoring, multiple packet transmission and adaptation to changes in channel quality, by adjusting the packet transmission rates according to certain optimization criteria, provided greater throughput. The key feature of the proposed method is the combination of two factors: 1) detection of intrinsic channel conditions by measuring the fluctuation of the noise-to-signal ratio via its standard deviation, and 2) detection of packet loss induced by congestion. The authors show that the use of such techniques in a WMN can significantly improve performance in terms of the packet sending rate. The effectiveness of the proposed method was demonstrated in a simulated wireless network testbed via packet-level simulation.

Relevance: 90.00%

Abstract:

In recent years, wireless communication infrastructures have been widely deployed for both personal and business applications. IEEE 802.11 series Wireless Local Area Network (WLAN) standards attract a lot of attention due to their low cost and high data rate. Wireless ad hoc networks that use IEEE 802.11 standards are one of the hot spots of recent network research. Designing appropriate Media Access Control (MAC) layer protocols is one of the key issues for wireless ad hoc networks.

Existing wireless applications typically use omni-directional antennas. With an omni-directional antenna, the gain of the antenna is the same in all directions. Due to the nature of the Distributed Coordination Function (DCF) mechanism of the IEEE 802.11 standards, only one of the one-hop neighbors can send data at a time. Nodes other than the sender and the receiver must be either idle or listening, otherwise collisions can occur. The downside of the omni-directionality of antennas is that the spatial reuse ratio is low and the capacity of the network is considerably limited.

Directional antennas have therefore been introduced to improve spatial reuse. A directional antenna offers the following benefits: it can improve transport capacity by decreasing the interference of a directional main lobe; it can increase coverage range due to a higher SINR (Signal to Interference plus Noise Ratio), i.e., better connectivity can be achieved with the same power consumption; and power usage can be reduced, i.e., for the same coverage, a transmitter can lower its power consumption.

To utilize the advantages of directional antennas, we propose a relay-enabled MAC protocol. Two relay nodes are chosen to forward data when the channel condition of the direct link from the sender to the receiver is poor. The two relay nodes can transfer data at the same time, and pipelined data transmission can be achieved by using directional antennas. Throughput improves significantly when the relay-enabled MAC protocol is introduced.

Besides these strong points, directional antennas also have some clear drawbacks, such as the hidden terminal and deafness problems and the requirement of maintaining location information for each node. Therefore, an omni-directional antenna should be used in some situations. The combined use of omni-directional and directional antennas leads to the problem of configuring heterogeneous antennas, i.e., given a network topology and a traffic pattern, we need to find a tradeoff between using omni-directional and directional antennas to obtain better network performance over this configuration.

Directly and mathematically establishing the relationship between network performance and the antenna configuration is extremely difficult, if not intractable. Therefore, in this research, we propose several clustering-based methods to obtain approximate solutions to the heterogeneous antenna configuration problem, which can improve network performance significantly.

Our proposed methods consist of two steps. The first step (clustering links) is to cluster the links into different groups based on a matrix-based system model. After clustering, the links in the same group have similar neighborhood nodes and will use the same type of antenna. The second step (labeling links) is to decide the type of antenna for each group. For heterogeneous antennas, some groups of links will use directional antennas and others will adopt omni-directional antennas. Experiments were conducted to compare the proposed methods with existing methods. Experimental results demonstrate that our clustering-based methods can improve network performance significantly.
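A minimal sketch of the two-step cluster-then-label idea, under assumed inputs (the dissertation's actual matrix-based system model and labeling rule are not reproduced here; the density heuristic below is hypothetical):

    import numpy as np
    from sklearn.cluster import KMeans

    def configure_antennas(neighborhood: np.ndarray, k: int):
        """neighborhood: one 0/1 row per link marking nodes in its vicinity."""
        # Step 1 (clustering links): links with similar neighborhoods share a group.
        groups = KMeans(n_clusters=k, n_init=10).fit_predict(neighborhood)
        # Step 2 (labeling links): an illustrative heuristic -- groups whose links
        # see denser neighborhoods (more potential interferers) go directional.
        overall = neighborhood.sum(axis=1).mean()
        labels = {}
        for g in range(k):
            density = neighborhood[groups == g].sum(axis=1).mean()
            labels[g] = "directional" if density > overall else "omni"
        return groups, labels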

Relevance: 90.00%

Abstract:

This research involves the design, development, and theoretical demonstration of models resulting in integrated misbehavior resolution protocols for ad hoc networked devices. Game theory was used to analyze strategic interaction among independent devices with conflicting interests. Packet forwarding at the routing layer of autonomous ad hoc networks was investigated. Unlike existing reputation-based or payment schemes, this model is based on repeated interactions. To enforce cooperation, a community enforcement mechanism was used, whereby selfish nodes that drop packets are punished not only by the victim, but also by all nodes in the network. A stochastic packet forwarding game strategy was then introduced; our solution relaxes the uniform traffic demand assumption that is pervasive in other works. To address the concerns of imperfect private monitoring in resource-aware ad hoc networks, a belief-free equilibrium scheme was developed that reduces the impact of noise on cooperation. This scheme also eliminates the need to infer the private history of other nodes and simplifies the computation of an optimal strategy. The belief-free approach reduces node overhead and is easily tractable, making the system operation feasible. Motivated by the versatile nature of evolutionary game theory, the assumption of a rational node is relaxed, leading to the development of a framework for mitigating routing selfishness and misbehavior in multi-hop networks. This is accomplished by setting nodes to play a fixed strategy rather than independently choosing a rational strategy. A range of simulations showed improved cooperation between selfish nodes compared with earlier results. Cooperation among ad hoc nodes can also protect a network from malicious attacks: in the absence of a central trusted entity, many security mechanisms and privacy protections require cooperation among ad hoc nodes. Therefore, using game theory and evolutionary game theory, a mathematical framework has been developed that explores trust mechanisms to achieve security in the network. This framework is one of the first steps towards the synthesis of an integrated solution, and it demonstrates that security depends solely on the initial trust level that nodes have for each other.
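A toy simulation of the community enforcement idea, in which a node caught dropping packets is refused service by every node for a fixed number of rounds (the drop probability, round count and punishment length are illustrative assumptions, not the dissertation's game):

    import random

    PUNISH_ROUNDS = 5  # hypothetical community-wide punishment length

    class Node:
        def __init__(self, selfish=False):
            self.selfish = selfish
            self.punished_until = -1  # round until which the network shuns us

    def play(nodes, rounds=1000):
        delivered = 0
        for rnd in range(rounds):
            src, relay = random.sample(nodes, 2)
            if src.punished_until >= rnd:
                continue  # community enforcement: nobody forwards for a punished node
            if relay.selfish and random.random() < 0.5:
                relay.punished_until = rnd + PUNISH_ROUNDS  # defection seen by all
            else:
                delivered += 1
        return delivered / rounds

    print(play([Node(), Node(), Node(selfish=True)]))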

Relevance: 90.00%

Abstract:

Wireless sensor networks are emerging as effective tools for the gathering and dissemination of data. They can be applied in many fields, including health, environmental monitoring, home automation and the military. Like all other computing systems, they must include security features so that security-sensitive data traversing the network is protected. However, traditional security techniques cannot be applied to wireless sensor networks, due to the constraints on battery power, memory, and computational capacity of the miniature wireless sensor nodes. To address this need, it becomes necessary to develop new lightweight security protocols. This dissertation focuses on designing a suite of lightweight trust-based security mechanisms and a cooperation enforcement protocol for wireless sensor networks. It presents a trust-based cluster head election mechanism used to elect new cluster heads. This solution prevents a major security breach against the routing protocol, namely the election of malicious or compromised cluster heads. The dissertation also describes a location-aware, trust-based mechanism for detecting and isolating compromised nodes. Both mechanisms rely on the ability of a node to monitor its neighbors. Using neighbor monitoring techniques, nodes are able to determine their neighbors' reputation and trust level through probabilistic modeling. The mechanisms were designed to mitigate internal attacks within wireless sensor networks, and the feasibility of the approach is demonstrated through extensive simulations. The dissertation also addresses non-cooperation problems in multi-user wireless sensor networks: a scalable lightweight cooperation enforcement algorithm using evolutionary game theory is designed, and its effectiveness is validated through mathematical analysis and simulation. This research has advanced the knowledge of wireless sensor network security and cooperation by developing new techniques based on mathematical models, enabling others to build on this work towards the creation of highly trusted wireless sensor networks and facilitating their full utilization in fields ranging from civilian to military applications.
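One common probabilistic model consistent with this description is Beta reputation, in which the cooperative and misbehaving events counted by neighbor monitoring update a Beta distribution whose mean serves as the trust value. The sketch below assumes this formulation for illustration; the dissertation's exact model and election threshold may differ.

    def trust(alpha: int, beta: int) -> float:
        """Mean of Beta(alpha+1, beta+1): alpha cooperative events, beta misbehaviors."""
        return (alpha + 1) / (alpha + beta + 2)

    def elect_cluster_head(candidates, observations, threshold=0.7):
        """Pick the most trusted candidate, if its trust clears the threshold."""
        scored = [(trust(*observations[c]), c) for c in candidates]
        best_score, best = max(scored, key=lambda t: t[0])
        return best if best_score >= threshold else None

    # Example: node "b" has been seen cooperating 18 times and misbehaving twice.
    obs = {"a": (5, 5), "b": (18, 2), "c": (0, 9)}
    print(elect_cluster_head(["a", "b", "c"], obs))  # -> "b"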

Relevance: 90.00%

Abstract:

The spread of home automation solutions depends on enabling technologies that support communication among the many agents of such networks. The goal of this thesis is to design and implement a Java-based middleware for distributed sensors, called SensorNetwork, that allows a home automation agent to sense the environment. The main features of the system are uniform access to heterogeneous distributed sensors, a high level of automation (node autoconfiguration and autodiscovery), configuration at deployment time, modularity, and ease of use and of extension with new sensors. The system is based on a component-container architecture that allows sensors to be used within sensor stations and supports remote access through a naming service defined ad hoc.
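A schematic sketch of the component-container pattern and ad hoc naming service described above (written in Python for brevity, although the actual SensorNetwork middleware is Java-based; all class and method names here are hypothetical):

    class NamingService:
        """Ad hoc registry mapping station names to stations for remote lookup."""
        def __init__(self):
            self._registry = {}
        def bind(self, name, station):
            self._registry[name] = station
        def lookup(self, name):
            return self._registry[name]

    class SensorStation:
        """The container: hosts sensor components and registers itself."""
        def __init__(self, name, naming: NamingService):
            self.sensors = {}
            naming.bind(name, self)  # autodiscovery via registration
        def deploy(self, sensor_id, read_fn):
            self.sensors[sensor_id] = read_fn  # uniform access to heterogeneous sensors
        def read(self, sensor_id):
            return self.sensors[sensor_id]()

    naming = NamingService()
    station = SensorStation("kitchen", naming)
    station.deploy("temp0", lambda: 21.5)  # a stub temperature sensor
    print(naming.lookup("kitchen").read("temp0"))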

Relevance: 90.00%

Abstract:

A RET network consists of a network of photo-active molecules, called chromophores, that can participate in inter-molecular energy transfer known as resonance energy transfer (RET). RET networks are used in a variety of applications including cryptographic devices, storage systems, light harvesting complexes, biological sensors, and molecular rulers. In this dissertation, we focus on creating a RET device called the closed-diffusive exciton valve (C-DEV), in which the input-to-output transfer function is controlled by an external energy source, similar to a semiconductor transistor such as the MOSFET. Due to their biocompatibility, molecular devices like the C-DEV can be used to introduce computing power into biological, organic, and aqueous environments such as living cells. Furthermore, the underlying physics of RET devices is stochastic in nature, making them suitable for stochastic computing, in which true random distribution generation is critical.

In order to determine a valid configuration of chromophores for the C-DEV, we developed a systematic process based on user-guided design space pruning techniques and built-in simulation tools. We show that our C-DEV is 15x better than C-DEVs designed using ad hoc methods that rely on limited data from prior experiments. We also show ways in which the C-DEV can be improved further and how different varieties of C-DEVs can be combined to form more complex logic circuits. Moreover, the systematic design process can be used to search for valid chromophore network configurations for a variety of RET applications.
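A generic sketch of the prune-then-simulate loop implied by "user-guided design space pruning" (purely illustrative; simulate and the constraint checks below stand in for the dissertation's domain-specific tools and chromophore parameters):

    from itertools import product

    def prune_and_rank(parameter_grid, constraints, simulate, budget=100):
        """Enumerate configurations, prune cheaply, then rank survivors by simulation."""
        keys = list(parameter_grid)
        survivors = []
        for values in product(*parameter_grid.values()):
            config = dict(zip(keys, values))
            if all(check(config) for check in constraints):  # cheap pruning first
                survivors.append(config)
        survivors = survivors[:budget]  # cap the number of expensive simulations
        return sorted(survivors, key=simulate, reverse=True)  # best-scoring first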

We also describe a feasibility study for a technique used to control the orientation of chromophores attached to DNA. Being able to control the orientation can expand the design space for RET networks because it provides another parameter to tune their collective behavior. While results showed limited control over orientation, the analysis required the development of a mathematical model that can be used to determine the distribution of dipoles in a given sample of chromophore constructs. The model can be used to evaluate the feasibility of other potential orientation control techniques.

Relevance: 90.00%

Abstract:

Advertising investment and audience figures indicate that television continues to lead as a mass advertising medium. However, its effectiveness is questioned due to problems such as zapping, saturation and audience fragmentation, which has favoured the development of non-conventional advertising formats. This study provides empirical evidence for the theoretical development in this area. The investigation analyzes the recall generated by four non-conventional advertising formats in a real environment: short programme (branded content), television sponsorship, and internal and external telepromotion, versus the more conventional spot. The methodology integrated secondary data with primary data from ad hoc computer-assisted telephone interviews (CATI) performed on a sample of 2,000 individuals, aged 16 to 65, representative of the total television audience. Our findings show that non-conventional advertising formats are more effective at a cognitive level, generating higher levels of both unaided and aided recall, in all analyzed formats, when compared to the spot.

Relevance: 90.00%

Abstract:

Recent paradigms in wireless communication architectures describe environments where nodes exhibit highly dynamic behavior (e.g., User Centric Networks). In such environments, routing is still performed based on the regular packet-switched store-and-forward behavior. Although sufficient to compute at least an adequate path between a source and a destination, such routing behavior cannot adequately sustain the highly nomadic lifestyle that Internet users experience today. This thesis aims to analyse the impact of node mobility on routing scenarios, and to develop forwarding concepts that help in message forwarding across graphs where nodes exhibit human mobility patterns, as is the case in most of today's user-centric wireless networks. The first part of the work analysed the impact of mobility on routing. We found that node mobility can significantly affect routing performance, depending on link length, distance, and the mobility patterns of nodes. A study of current mobility parameters showed that they capture mobility only partially, and that the robustness of a routing protocol to node mobility depends on the sensitivity of its routing metric to node mobility. Mobility-aware routing metrics were therefore devised to increase routing robustness to node mobility. The proposed routing metrics fall into two categories: time-based and spatial correlation-based. For the validation of the metrics, several mobility models were used, including ones that mimic human mobility patterns. The metrics were implemented in the Network Simulator tool on two widely used multi-hop routing protocols: Optimized Link State Routing (OLSR) and Ad hoc On-Demand Distance Vector (AODV). Using the proposed metrics, we reduced the path re-computation frequency compared with the benchmark metric, meaning that more stable nodes were used to route data. The time-based routing metrics generally performed well across the different node mobility scenarios used. We also noted variation in the performance of the metrics, including the benchmark metric, under different mobility models, due to differences in the rules governing node mobility in the models.
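As one plausible shape for a time-based, mobility-aware metric (a hypothetical form for illustration; the thesis defines its own metrics), a link's base cost can be discounted by an age-based stability factor, so that links that have already survived longer are preferred over freshly formed, likely short-lived ones:

    import math

    def link_cost(base_cost: float, link_birth: float, now: float,
                  tau: float = 30.0) -> float:
        """Penalize young links: stability approaches 1 as the link ages."""
        age = now - link_birth                  # seconds since the link appeared
        stability = 1.0 - math.exp(-age / tau)  # tau is a hypothetical time constant
        return base_cost / max(stability, 1e-6)

    # A 2-second-old link costs far more than a 120-second-old one of equal quality:
    print(link_cost(1.0, link_birth=98.0, now=100.0))
    print(link_cost(1.0, link_birth=-20.0, now=100.0))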

Relevance: 90.00%

Abstract:

In today's big data world, data is being produced in massive volumes, at great velocity, and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments.

In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime.

In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying, which provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics.

Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud. The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes, with several real-world data sets and applications, validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
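To make the sampling-based progressive analytics idea from the second part above concrete, here is a minimal sketch of the general technique (not the NOW! implementation): evaluate an aggregate over progressively larger prefixes of a shuffled dataset, so that each estimate reuses the work of the previous sample, and a fixed seed gives repeatable semantics.

    import random

    def progressive_mean(data, fractions=(0.01, 0.05, 0.25, 1.0), seed=42):
        """Yield (fraction, estimate) pairs over nested, progressively larger samples."""
        rng = random.Random(seed)  # fixed seed -> repeatable semantics
        shuffled = data[:]
        rng.shuffle(shuffled)
        for f in fractions:
            sample = shuffled[: max(1, int(f * len(shuffled)))]  # prefix = nested sample
            yield f, sum(sample) / len(sample)

    for frac, estimate in progressive_mean(list(range(1_000_000))):
        print(f"{frac:5.0%} sample -> mean ~ {estimate:,.0f}")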

Relevance: 90.00%

Abstract:

With the dramatic growth of text information, there is an increasing need for powerful text mining systems that can automatically discover useful knowledge from text. Text is generally associated with all kinds of contextual information. Those contexts can be explicit, such as the time and location where a blog article is written or the author(s) of a biomedical publication, or implicit, such as the positive or negative sentiment that an author had when writing a product review; there may also be complex contexts, such as the social network of the authors. Many applications require analysis of topic patterns over different contexts. For instance, analysis of search logs in the context of the user can reveal how to improve the quality of a search engine by optimizing search results for particular users; analysis of customer reviews in the context of positive and negative sentiments can help the user summarize public opinions about a product; and analysis of blogs or scientific publications in the context of a social network can facilitate discovery of more meaningful topical communities. Since context information significantly affects the choices of topics and language made by authors, it is in general very important to incorporate it into analyzing and mining text data. Modeling the context in text and discovering contextual patterns of language units and topics from text, a general task which we refer to as Contextual Text Mining, has widespread applications in text mining.

In this thesis, we provide a novel and systematic study of contextual text mining, a new paradigm of text mining that treats context information as the "first-class citizen." We formally define the problem of contextual text mining and its basic tasks, and propose a general framework for contextual text mining based on generative modeling of text. This conceptual framework provides general guidance on text mining problems with context information and can be instantiated into many real tasks, including the general problem of contextual topic analysis. We formally present a functional framework for contextual topic analysis, with a general contextual topic model and its various versions, which can effectively solve text mining problems in many real-world applications. We further introduce general components of contextual topic analysis: adding priors to contextual topic models to incorporate prior knowledge, regularizing contextual topic models with the dependency structure of the context, and postprocessing contextual patterns to extract refined patterns. These refinements of the general contextual topic model naturally lead to a variety of probabilistic models that incorporate different types of context and various assumptions and constraints. These special versions of the contextual topic model are proved effective in a variety of real applications involving topics and explicit, implicit, and complex contexts. We then introduce a postprocessing procedure for contextual patterns that generates meaningful labels for multinomial context models, providing a general way to interpret text mining results for real users. By applying contextual text mining in the "context" of other text information management tasks, including ad hoc text retrieval and web search, we further prove the effectiveness of contextual text mining techniques in a quantitative way with large-scale datasets. The framework of contextual text mining not only unifies many explorations of text analysis with context information, but also opens up many new possibilities for future research directions in text mining.
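One simple instantiation of a contextual topic model, consistent with the description above but not necessarily the thesis's exact formulation, makes the topic mixing weights depend on the context while the word distributions are shared across contexts:

    \[
      p(w \mid d, c) \;=\; \sum_{j=1}^{k} p(\theta_j \mid d, c)\, p(w \mid \theta_j)
    \]

Here \(d\) is a document, \(c\) its context (e.g., time, location, sentiment, or author community), \(\theta_1,\dots,\theta_k\) are topics, \(p(\theta_j \mid d, c)\) are the context-dependent mixing weights, and \(p(w \mid \theta_j)\) are the context-independent topic word distributions.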

Relevance: 90.00%

Abstract:

A collaboration between dot.rural at the University of Aberdeen and the iSchool at Northumbria University, POWkist is a pilot study exploring potential uses of currently available linked datasets within the cultural heritage domain. Many privately-held family history collections (shoebox archives) remain vulnerable unless a sustainable, affordable and accessible model of citizen-archivist digital preservation can be offered. Citizen-historians have used the web as a platform to preserve cultural heritage; however, with no accessible or sustainable model, these digital footprints have been ad hoc and rarely connected to broader historical research. Similarly, current approaches to connecting material on the web by exploiting linked datasets do not take into account the data characteristics of the cultural heritage domain. Funded by Semantic Media, the POWkist project is investigating how best to capture, curate, connect and present the contents of citizen-historians' shoebox archives in an accessible and sustainable online collection. Using the Curios platform, an open-source digital archive, we have digitised a collection relating to a prisoner of war during WWII (1939-1945). Following a series of user group workshops, POWkist is now connecting these 'made digital' items with the broader web using a semantic technology model and identifying appropriate linked datasets of relevant content, such as DBpedia (a linked dataset derived from Wikipedia) and Ordnance Survey Open Data. We are analysing the characteristics of cultural heritage linked datasets so that these materials are better visualised, contextualised and presented in an attractive and comprehensive user interface. Our paper will consider the issues we have identified and the solutions we are developing, and will include a demonstration of our work-in-progress.