864 results for Multihop routing


Relevance:

10.00%

Publisher:

Abstract:

Wireless communication is a growing trend in industrial environments, and within this trend the WirelessHART technology stands out. In this context, the search for improvements to the technology is natural, and such improvements relate directly to the routing and scheduling algorithms. In this thesis, we present a literature review of the main routing and scheduling solutions proposed specifically for WirelessHART. The thesis also proposes a new scheduling algorithm, called Flow Scheduling, that aims to improve superframe utilization and flexibility. For validation purposes, we develop a simulation module for Network Simulator 3 (NS-3) that models aspects such as node positioning, signal attenuation and energy consumption, and that provides per-link error probability configuration. The module also allows the scheduling superframe to be built using either the Flow or the Han algorithm. To validate the new algorithm, we execute a series of comparative tests and evaluate the algorithms' performance in terms of link allocation, delay and superframe occupation. To validate the physical layer of the simulation module, we statically configure routing and scheduling and perform reliability and energy consumption tests using several topologies and error probabilities from the literature.
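
As a rough illustration of the kind of superframe construction a flow-oriented scheduler must perform, the sketch below greedily assigns one dedicated (slot, channel) pair to each hop of a flow's route, respecting slot order along the route and node/channel conflicts. The class and method names are hypothetical and are not the thesis's Flow Scheduling algorithm or the NS-3 module's API.

# Hypothetical sketch of greedy superframe slot allocation for multihop flows.
# Names (Superframe, allocate_flow) are illustrative, not the thesis API.

class Superframe:
    def __init__(self, num_slots, num_channels):
        self.num_slots = num_slots
        self.num_channels = num_channels
        # busy[slot] -> nodes already transmitting or receiving in that slot
        self.busy = {s: set() for s in range(num_slots)}
        # used_channels[slot] -> channels already taken in that slot
        self.used_channels = {s: 0 for s in range(num_slots)}
        self.schedule = []  # (slot, channel, sender, receiver, flow_id)

    def allocate_flow(self, flow_id, route):
        """Allocate one dedicated (slot, channel) per hop, in route order."""
        slot = 0
        for sender, receiver in zip(route, route[1:]):
            # a hop is scheduled after the previous hop of the same flow,
            # in a slot where neither node is busy and a channel is free
            while (slot < self.num_slots and
                   (sender in self.busy[slot] or receiver in self.busy[slot] or
                    self.used_channels[slot] >= self.num_channels)):
                slot += 1
            if slot >= self.num_slots:
                raise RuntimeError("superframe too short for flow %s" % flow_id)
            channel = self.used_channels[slot]
            self.busy[slot].update((sender, receiver))
            self.used_channels[slot] += 1
            self.schedule.append((slot, channel, sender, receiver, flow_id))
            slot += 1  # the next hop must come later in the superframe

sf = Superframe(num_slots=100, num_channels=15)
sf.allocate_flow("F1", ["n3", "n2", "n1", "gateway"])
sf.allocate_flow("F2", ["n5", "n2", "gateway"])
print(sf.schedule)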

Relevance:

10.00%

Publisher:

Abstract:

The substantial increase in the number of applications offered through computer networks, as well as in the volume of traffic forwarded through the network, has made it harder to assure an adequate service level to users. Offering Quality of Service (QoS), honoring the parameters specified in Service Level Agreements (SLAs) established between service providers and their clients, is a traditional and extensive research area in computer networks. Several QoS provisioning schemes have been proposed over the last three decades, but their scope has always been limited by factors such as the tight coupling between network hardware and software, generally belonging to a single manufacturer. The advent of Software Defined Networking (SDN), along with the maturation of its main materialization, the OpenFlow protocol, allowed the decoupling of network hardware and software through an architecture that separates a control plane from a data plane. This simplifies the computer networking scenario, allowing new abstractions to be applied to the hardware composing the data plane through new software components executed in the control plane. This dissertation investigates the provision of QoS through the use and extension of the SDN architecture. Based on two new modules, SDNMon, which monitors the data plane, and MP-ROUTING, which determines the use of multiple paths when forwarding the data of a flow, we demonstrate that QoS metrics specified in SLAs, such as bandwidth, can be honored. Both modules were implemented and evaluated in a prototype. The evaluation results for several aspects of both proposed modules are presented in this dissertation, showing the accuracy obtained by the monitoring module SDNMon and the QoS gains obtained by forwarding flows over the multiple paths defined by MP-ROUTING.
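
As a minimal sketch of the multi-path idea described above (not the MP-ROUTING module itself), the snippet below enumerates candidate paths and accumulates them until their combined bottleneck bandwidth covers the bandwidth requested in an SLA. The topology, the bw_mbps attribute and the greedy policy are assumptions made for illustration.

# Hypothetical multi-path selection sketch; not the MP-ROUTING module itself.
import networkx as nx

def select_paths(g, src, dst, required_mbps):
    """Greedily pick candidate paths until their combined bottleneck
    bandwidth covers the SLA requirement."""
    chosen, total = [], 0.0
    for path in nx.shortest_simple_paths(g, src, dst):
        # bottleneck = smallest residual bandwidth along the path
        bottleneck = min(g[u][v]["bw_mbps"] for u, v in zip(path, path[1:]))
        if bottleneck <= 0:
            continue
        chosen.append((path, bottleneck))
        total += bottleneck
        if total >= required_mbps:
            return chosen
    raise RuntimeError("not enough residual bandwidth for the SLA")

g = nx.Graph()
g.add_edge("s1", "s2", bw_mbps=400)
g.add_edge("s2", "s4", bw_mbps=400)
g.add_edge("s1", "s3", bw_mbps=300)
g.add_edge("s3", "s4", bw_mbps=300)
print(select_paths(g, "s1", "s4", required_mbps=600))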

Relevance:

10.00%

Publisher:

Abstract:

Current and future applications pose requirements that the Internet architecture is not able to satisfy, such as mobility, multicast, multihoming and bandwidth guarantees. The Internet architecture has limitations that prevent all of these future requirements from being covered. New architectures have been proposed that take these requirements into account when a communication is established. ETArch (Entity Title Architecture) is a new, clean-slate Internet architecture, able to use the application's requirements for each communication and flexible enough to work across several layers. Routing plays an important role in the Internet, because it decides how primitives are forwarded through the network. In the Future Internet, all requirements depend on routing: routing is responsible for deciding the best path, and in the future a better route may take mobility aspects or energy consumption into account, for instance. When this work began, routing had not yet been defined for ETArch. This work provides intra- and inter-domain routing algorithms to be used in ETArch. We consider that the route should be defined completely before data traffic starts, to ensure that the requirements are met. In the Internet, routing has two distinct functions: (i) running specific algorithms to define the best route; and (ii) forwarding data primitives onto the correct link. In the traditional Internet architecture, both routing functions are performed in every router each time a packet arrives. This work allows the complete route to be defined before the communication starts, as in telecommunication systems. We define routing for ETArch and perform experiments to demonstrate the viability of control-plane routing. The initial setup before a communication takes longer, but afterwards only forwarding of primitives is performed, saving processing time.
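
The control-plane routing idea, with the route computed completely before any data primitive is sent, can be illustrated with a small sketch. The function names, the cost metric and the per-node table layout are assumptions for illustration, not ETArch's actual mechanisms.

# Illustrative sketch: route computed entirely in the control plane before
# data flows; names and the cost metric are assumptions, not ETArch's API.
import networkx as nx

def setup_route(topology, src, dst, weight="cost"):
    """Compute the full path once and install forwarding entries along it."""
    path = nx.shortest_path(topology, src, dst, weight=weight)
    tables = {}  # node -> next hop for this communication
    for node, next_hop in zip(path, path[1:]):
        tables[node] = next_hop
    return path, tables

def forward(tables, node):
    """Data-plane step: a single table lookup, no route computation per packet."""
    return tables.get(node)

topo = nx.Graph()
topo.add_edge("A", "B", cost=1)
topo.add_edge("B", "C", cost=1)
topo.add_edge("A", "C", cost=5)
path, tables = setup_route(topo, "A", "C")
print(path, forward(tables, "A"))  # ['A', 'B', 'C'] 'B'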

Relevance:

10.00%

Publisher:

Abstract:

We develop a framework for estimating the quality of transmission (QoT) of a new lightpath before it is established, as well as for calculating the expected degradation it will cause to existing lightpaths. The framework correlates the QoT metrics of established lightpaths, which are readily available from coherent optical receivers that can be extended to serve as optical performance monitors. Past similar studies used only space (routing) information and thus neglected spectrum, and they focused on old-generation non-coherent networks. The proposed framework accounts for correlation in both the space and spectrum domains and can be applied to both fixed-grid wavelength division multiplexing (WDM) and elastic optical networks. It is based on a graph transformation that exposes and models the interference between spectrum-neighboring channels. Our results indicate that our QoT estimates are very close to the actual performance data, that is, to having perfect knowledge of the physical layer. The proposed estimation framework is shown to provide up to 4 × 10⁻² lower pre-forward-error-correction bit error ratio (BER) compared to the worst-case interference scenario, which overestimates the BER. The higher accuracy can be harvested when lightpaths are provisioned with low margins; our results showed up to a 47% reduction in required regenerators, a substantial saving in equipment cost.
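
A rough sketch of the graph-transformation idea follows: established lightpaths become vertices, and an edge is added whenever two lightpaths share a fiber link and occupy neighboring spectrum slots, so interference can be reasoned about along this graph. The lightpath representation, guard-band rule and data are assumptions for illustration, not the paper's exact formulation.

# Hypothetical sketch of a spectrum-neighbor interference graph;
# the lightpath representation is an assumption for illustration.
from itertools import combinations

# Each lightpath: the fiber links it traverses and the spectrum slots it occupies.
lightpaths = {
    "lp1": {"links": {("A", "B"), ("B", "C")}, "slots": range(0, 4)},
    "lp2": {"links": {("B", "C"), ("C", "D")}, "slots": range(4, 8)},
    "lp3": {"links": {("C", "D")},             "slots": range(12, 16)},
}

def spectrum_neighbors(a, b, guard=1):
    """True if the two slot ranges touch or lie within `guard` slots of each other."""
    lo_a, hi_a = min(a["slots"]), max(a["slots"])
    lo_b, hi_b = min(b["slots"]), max(b["slots"])
    return not (hi_a + guard < lo_b or hi_b + guard < lo_a)

interference_edges = [
    (p, q)
    for p, q in combinations(lightpaths, 2)
    if lightpaths[p]["links"] & lightpaths[q]["links"]       # share a fiber link
    and spectrum_neighbors(lightpaths[p], lightpaths[q])     # neighboring spectrum
]
print(interference_edges)  # [('lp1', 'lp2')]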

Relevance:

10.00%

Publisher:

Abstract:

Human use of the oceans is increasingly in conflict with the conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as military, fishing, transportation and offshore energy have historically been post hoc; i.e. the time and place of human activity is often already determined before environmental impacts are assessed. In this dissertation, I build robust species distribution models in two case study areas, the U.S. Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting offshore wind energy development, and routing ships to minimize the risk of striking whales. Both decision frameworks present the tradeoff between conservation risk and industry profit through synchronized variable and map views as online spatial decision support systems.

For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights for sensitivity to OWED collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff-plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify the months that minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
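
A toy sketch of this kind of site scoring is given below: per-species bird density is weighted by a collision/displacement sensitivity to obtain a conservation risk per site, which can then be set against a crude profitability proxy. All species, weights and numbers are invented and are not the dissertation's values.

# Hypothetical sketch of scoring candidate OWED sites; all numbers are illustrative.
sensitivity = {"gull": 0.8, "loon": 1.0, "tern": 0.6}   # collision/displacement weight

sites = {
    "site_A": {"density": {"gull": 3.0, "loon": 0.5, "tern": 1.0},
               "wind_ms": 9.1, "km_to_grid": 20},
    "site_B": {"density": {"gull": 0.8, "loon": 2.0, "tern": 0.2},
               "wind_ms": 8.4, "km_to_grid": 55},
}

def conservation_risk(site):
    # density weighted by sensitivity, summed across species
    return sum(sensitivity[sp] * d for sp, d in site["density"].items())

def profit_proxy(site):
    # crude proxy: more wind is better, longer transmission distance is worse
    return site["wind_ms"] - 0.02 * site["km_to_grid"]

for name, s in sites.items():
    print(name, round(conservation_risk(s), 2), round(profit_proxy(s), 2))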

Routing ships to avoid whale strikes (chapter 5) can similarly be viewed as a tradeoff, but it is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, and then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance locations to the study areas. Varying a multiplier on the cost surface enables calculation of multiple routes with different costs to cetacean conservation versus costs to the transportation industry, measured as distance. As in the siting chapter, a spatial decision support system enables toggling between the map and tradeoff-plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
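
The least-cost routing step can be sketched as follows, assuming a small synthetic cost grid and the route_through_array helper from scikit-image; the cost values and the multiplier policy are illustrative only.

# Hypothetical sketch of least-cost ship routing over a cost surface;
# the whale cost grid and multiplier values are illustrative assumptions.
import numpy as np
from skimage.graph import route_through_array

whale_cost = np.array([[1, 1, 1, 1, 1],
                       [1, 9, 9, 9, 1],
                       [1, 9, 1, 1, 1],
                       [1, 1, 1, 9, 1],
                       [1, 1, 1, 1, 1]], dtype=float)

def route_for_multiplier(k, start=(0, 0), end=(4, 4)):
    """Higher k penalizes whale-dense cells more, trading distance for risk."""
    surface = 1.0 + k * whale_cost          # base traversal cost + weighted risk
    path, cost = route_through_array(surface, start, end, fully_connected=True)
    return path, cost

for k in (0.0, 0.5, 5.0):
    path, cost = route_for_multiplier(k)
    print(k, len(path), round(cost, 1))     # longer but lower-risk routes as k grows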

Essential inputs to these decision frameworks are the distributions of the species. The two preceding chapters comprise species distribution models for the two case study areas, the U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density respectively. Although density is preferred for estimating potential biological removal, per Marine Mammal Protection Act requirements in the U.S., the necessary parameters, especially distance and angle of observation, are less readily available across publicly mined datasets.

For predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to the continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operating Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated the model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing easy navigation of models by taxon, region, season and data provider.
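
One common way to pick such a presence/absence threshold from an ROC curve is Youden's J statistic, sketched below on synthetic data; the chapter may use a different optimality criterion.

# Hypothetical sketch: choose a presence/absence cutoff from an ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                               # synthetic presence/absence
y_score = np.clip(y_true * 0.4 + rng.normal(0.3, 0.2, 500), 0, 1)   # synthetic model output

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr                                   # Youden's J balances both error rates
best = thresholds[np.argmax(j)]
presence_map = y_score >= best                  # binarize predictions at the chosen cutoff
print("threshold:", round(float(best), 3), "predicted presences:", int(presence_map.sum()))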

For predicting cetacean density in the inner waters of British Columbia (chapter 3), I calculated density from systematic line-transect marine mammal surveys conducted by Raincoast Conservation Foundation over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007). Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven useful in cases where fewer observations are available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, achieving greater spatial precision with the DSM method over CDS, reporting for spring and autumn seasons (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and the increased oil spill and ocean noise associated with growing container ship and oil tanker traffic in British Columbia’s continental shelf waters.
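
For reference, a toy version of a stratum-level Conventional Distance Sampling point estimate is sketched below; all inputs are invented, and the real analysis fits a detection function (e.g. in the Distance software) rather than assuming an effective strip half-width.

# Toy Conventional Distance Sampling (CDS) density estimate for one stratum.
# All inputs are invented; a real analysis fits a detection function to obtain esw.
n_groups = 42             # groups detected on-effort
mean_group_size = 3.1     # expected group size
transect_length_km = 850  # total line length L in the stratum
esw_km = 0.9              # effective strip half-width from the detection function

density_per_km2 = (n_groups * mean_group_size) / (2 * esw_km * transect_length_km)
stratum_area_km2 = 12000
abundance = density_per_km2 * stratum_area_km2
print(round(density_per_km2, 4), "ind/km^2 ->", round(abundance), "individuals")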

Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite-derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance, which can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff-plot views. These products make complex processes transparent, allowing conservation, industry and other stakeholders to game scenarios towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management and dynamic ocean management.

Relevance:

10.00%

Publisher:

Abstract:

North American freshwater runoff records have been used to support the case that climate flickers were caused by shutdowns of the ocean thermohaline circulation (THC) resulting from reversals of meltwater discharges. Inconsistencies in the documentation of these meltwater switches, however, continue to fuel the debate on the cause(s) of the oscillatory nature of the deglacial climate. New oxygen and carbon isotope records from the northern Gulf of Mexico depict in exceptional detail the succession of meltwater floods and pauses through the southern routing during the interval 16 to 8.9 ka (14C years BP; ka, kiloannum). The records underscore the bimodal role played by the Gulf of Mexico as a destination of meltwater discharges from the receding Laurentide Ice Sheet. The evidence indicates that the Gulf of Mexico acted as the principal source of superfloods at 13.4, 12.6, and 11.9 ka that reached the North Atlantic and contributed significantly to density stratification, disruption of ocean ventilation, and cold reversals. The Gulf of Mexico lapsed into a "relief valve" role in post-Younger Dryas time, when meltwater discharges were rerouted south at 9.9, 9.7, 9.4, and 9.1 ka, temporarily interrupting the North Atlantic-bound freshwater discharges from Lake Agassiz. The history of meltwater events in the Gulf of Mexico contradicts the model in which meltwater flow via the eastern outlets into the North Atlantic disrupted the ocean THC, causing cooling, while diversions to the Gulf of Mexico via the Mississippi River enhanced THC and warming.

Relevance:

10.00%

Publisher:

Abstract:

Coring during Integrated Ocean Drilling Program Expeditions 315, 316, and 333 recovered turbiditic sands from the forearc Kumano Basin (Site C0002), a Quaternary slope basin (Site C0018), and the uplifted trench wedge (Site C0006) along the Kumano Transect of the Nankai Trough accretionary wedge offshore southwest Japan. The compositions of the submarine turbiditic sands are investigated here in terms of bulk and heavy mineral modal compositions to identify their provenance and dispersal mechanisms, as they may reflect changes in regional tectonics during the past ca. 1.5 Myr. The results show a marked change in the detrital signature and heavy mineral composition of the forearc and slope basin facies around 1 Ma. This sudden change is interpreted to reflect a major change in sand provenance, rather than heavy mineral dissolution and/or diagenetic effects, in response to changing tectonics and sedimentation patterns. In the trench-slope basin, the sands older than 1 Ma were probably eroded from the exposed Cretaceous-Tertiary accretionary complex of the Shimanto Belt and transported via the former course of the Tenryu submarine canyon system, which today enters the Nankai Trough northeast of the study area. In contrast, the high abundance of volcanic lithics and the volcanic heavy mineral suites of the sands younger than 1 Ma point to a strong volcanic component of sediment derived from the Izu-Honshu collision zone and probably funnelled to this site through the Suruga Canyon. However, sands in the forearc basin show a persistent presence of blue sodic amphiboles across the 1 Ma boundary, indicating a continuous flux of sediments from the Kumano/Kinokawa River. This implies that the sands in the older turbidites were transported by transverse flow down the slope. The slope basin facies then switched to reflect longitudinal flow around 1 Ma, when the turbiditic sands tapped a volcanic provenance in the Izu-Honshu collision zone, while the sediments transported transversely became confined to the Kumano Basin. The change in the depositional systems around 1 Ma is therefore a manifestation of the decoupling of the sediment routing pattern from transverse to long-distance axial flow, in response to forearc high uplift along the megasplay fault.

Relevance:

10.00%

Publisher:

Abstract:

Advances in low-power microprocessors, wireless networks and embedded systems have raised the need to utilize the significant resources of mobile devices. These devices, for example smartphones, tablets, laptops, wearables and sensors, are gaining enormous processing power, storage capacity and wireless bandwidth. In addition, advances in wireless mobile technology have created a new communication paradigm in which a wireless network can be created without any a priori infrastructure, called a mobile ad hoc network (MANET). While progress is being made towards improving the efficiency of mobile devices and the reliability of wireless mobile networks, mobile technology continues to face the challenges of unpredictable disconnections, dynamic mobility and the heterogeneity of routing protocols. Traditional wired and wireless routing protocols are therefore not suitable for MANETs, due to their unique, dynamic ad hoc nature. For this reason, the research community has developed, and continues to develop, routing protocols for MANETs to cope with these challenges. However, no single generic ad hoc routing protocol is available so far that addresses all of the basic MANET challenges mentioned above. This diverse and ever-growing range of routing protocols has created barriers that prevent mobile nodes of different MANET taxonomies from intercommunicating, wasting a huge amount of valuable resources. To enable interaction between heterogeneous MANETs, the routing protocols require conversion of packets, meta-models and behavioural capabilities. Here, the fundamental challenge is to understand the packet-level message format, meta-model and behaviour of the different routing protocols, which differ significantly across MANET taxonomies. To overcome these issues, this thesis proposes an interoperable framework for heterogeneous MANETs called IF-MANET. The framework hides the complexities of heterogeneous routing protocols and provides a homogeneous layer for seamless communication between them. It defines a unique ontology for MANET routing protocols and a Message Translator that semantically compares packets and generates missing fields using the rules defined in the ontology; translation between existing as well as newly arriving routing protocols is thus achieved dynamically and on the fly. To discover routes for delivering packets across heterogeneous MANET taxonomies, IF-MANET creates a special Gateway node that provides cluster-based inter-domain routing. The IF-MANET framework can be used to develop different middleware applications, for example mobile grid computing that could potentially utilise huge amounts of aggregated data collected from heterogeneous mobile devices, or disaster and crisis management applications that provide on-the-fly, infrastructure-less emergency communication across organisations by utilising different MANET taxonomies.
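
A toy sketch of the kind of field-level translation such a Message Translator might perform between two route-request formats is shown below; the field mappings and default rules are invented for illustration and are not taken from the IF-MANET ontology.

# Hypothetical sketch of ontology-driven packet translation between
# two MANET route-request formats; mappings and defaults are illustrative.
ONTOLOGY = {
    ("AODV_RREQ", "DSR_RREQ"): {
        "field_map": {                      # source field -> target field
            "orig_addr": "src",
            "dest_addr": "dst",
            "hop_count": "hops",
        },
        "defaults": {"route_record": []},   # fields the source format lacks
    }
}

def translate(packet, src_fmt, dst_fmt):
    rules = ONTOLOGY[(src_fmt, dst_fmt)]
    out = {dst: packet[src] for src, dst in rules["field_map"].items() if src in packet}
    for field, value in rules["defaults"].items():
        out.setdefault(field, value)        # synthesize missing fields from the rules
    return out

aodv_rreq = {"orig_addr": "10.0.0.1", "dest_addr": "10.0.0.9", "hop_count": 2, "rreq_id": 7}
print(translate(aodv_rreq, "AODV_RREQ", "DSR_RREQ"))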