865 results for Vehicle routing


Relevance: 10.00%

Abstract:

Current and future applications pose new requirements that the Internet architecture is not able to satisfy, such as mobility, multicast, multihoming, and bandwidth guarantees. The Internet architecture has limitations that prevent all future requirements from being covered. New architectures have been proposed that consider these requirements when a communication is established. ETArch (Entity Title Architecture) is a clean-slate Internet architecture able to use each application's requirements for each communication, and flexible enough to work across several layers. Routing plays an important role in the Internet, because it decides the best way to forward primitives through the network. In the Future Internet, all requirements depend on routing: it is responsible for deciding the best path, and in the future a better route may take mobility or energy consumption into account, for instance. In the early stage of ETArch, routing had not yet been defined. This work provides intra- and inter-domain routing algorithms to be used in ETArch. It assumes that the route should be defined completely before data traffic starts, to ensure that the requirements are met. In the Internet, routing has two distinct functions: (i) running specific algorithms to define the best route, and (ii) forwarding data primitives onto the correct link. In the traditional Internet architecture, both routing functions are performed in every router each time a packet arrives. This work allows the complete route to be defined before the communication starts, as in telecommunication systems. The routing for ETArch was determined, and experiments were performed to demonstrate the viability of the control-plane routing. The initial setup before a communication takes longer, but afterwards only forwarding of primitives is performed, saving processing time.
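
The split between route computation at setup time and lightweight forwarding afterwards can be pictured with a short sketch (the topology, node names, and Dijkstra metric below are illustrative assumptions, not ETArch's actual control plane):

```python
import networkx as nx

# Hypothetical topology; edge weights could encode delay, energy
# consumption, or any other requirement-driven metric.
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1), ("B", "C", 2), ("A", "D", 4), ("D", "C", 1),
])

def setup_route(graph, src, dst):
    """Control plane: compute the complete path once, before any data flows."""
    path = nx.dijkstra_path(graph, src, dst)
    # Install one forwarding entry per hop for this communication.
    return {node: nxt for node, nxt in zip(path, path[1:])}

tables = setup_route(G, "A", "C")
print(tables)  # {'A': 'B', 'B': 'C'}: per-primitive work is now a dict lookup
```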

Relevance: 10.00%

Abstract:

We develop a framework for estimating the quality of transmission (QoT) of a new lightpath before it is established, as well as for calculating the expected degradation it will cause to existing lightpaths. The framework correlates the QoT metrics of established lightpaths, which are readily available from coherent optical receivers that can be extended to serve as optical performance monitors. Past similar studies used only space (routing) information and thus neglected spectrum, and they focused on old-generation noncoherent networks. The proposed framework accounts for correlation in both the space and spectrum domains and can be applied to both fixed-grid wavelength division multiplexing (WDM) and elastic optical networks. It is based on a graph transformation that exposes and models the interference between spectrum-neighboring channels. Our results indicate that our QoT estimates are very close to the actual performance data, that is, to having perfect knowledge of the physical layer. The proposed estimation framework is shown to provide up to 4 × 10⁻² lower pre-forward-error-correction bit error ratio (BER) compared to the worst-case interference scenario, which overestimates the BER. The higher accuracy can be harvested when lightpaths are provisioned with low margins; our results showed up to a 47% reduction in required regenerators, a substantial saving in equipment cost.
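
The graph transformation can be pictured as follows: lightpaths become nodes, and edges connect lightpaths that share a fiber link and occupy neighboring spectrum slots. A minimal sketch, with an invented slot model and topology rather than the paper's exact formulation:

```python
import networkx as nx

# Each lightpath: the fiber links it traverses and the spectrum slots it occupies.
lightpaths = {
    "lp1": {"links": {("A", "B"), ("B", "C")}, "slots": range(0, 4)},
    "lp2": {"links": {("B", "C")},             "slots": range(4, 8)},
    "lp3": {"links": {("C", "D")},             "slots": range(4, 8)},
}

def interference_graph(lps):
    """Connect lightpaths that share a link AND are spectrum-adjacent."""
    g = nx.Graph()
    g.add_nodes_from(lps)
    names = sorted(lps)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            share_link = lps[a]["links"] & lps[b]["links"]
            adjacent = (lps[a]["slots"].stop == lps[b]["slots"].start
                        or lps[b]["slots"].stop == lps[a]["slots"].start)
            if share_link and adjacent:
                # Edges are where QoT correlation/degradation would be modelled.
                g.add_edge(a, b)
    return g

print(list(interference_graph(lightpaths).edges()))  # [('lp1', 'lp2')]
```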

Relevance: 10.00%

Abstract:

Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as the military, fishing, transportation, and offshore energy have historically been post hoc; i.e., the time and place of human activity is often already determined before environmental impacts are assessed. In this dissertation, I build robust species distribution models in two case study areas, the US Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance, respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting offshore wind energy development, and routing ships to minimize the risk of striking whales. Both decision frameworks relate the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.

For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights reflecting sensitivity to OWED collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify the months that minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
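
The per-site sensitivity score amounts to a weighted combination across species; a minimal sketch with invented densities and weights (the actual chapter's weighting scheme is richer):

```python
import numpy as np

# Hypothetical per-site bird densities (rows: sites, columns: species) and
# per-species OWED sensitivity weights (collision + displacement combined).
density = np.array([[0.2, 1.1, 0.0],
                    [0.9, 0.3, 0.5]])
sensitivity = np.array([0.8, 0.3, 0.6])

site_risk = density @ sensitivity          # ecological cost per 10 km² site
site_profit = np.array([4.2, 3.1])         # e.g. from wind speed, grid distance

# The tradeoff plot pairs each site's risk with its profit.
for i, (r, p) in enumerate(zip(site_risk, site_profit)):
    print(f"site {i}: risk={r:.2f}, profit={p:.2f}")
```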

Routing ships to avoid whale strikes (chapter 5) can similarly be viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, and then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance points to the study areas. Varying a multiplier on the cost surface enables calculation of multiple routes trading off cost to cetacean conservation against cost to the transportation industry, measured as distance. As in the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
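
The least-cost routing step can be sketched with a raster cost surface and a grid shortest-path solver; scikit-image's MCP_Geometric is one readily available implementation. The raster values and multiplier below are illustrative:

```python
import numpy as np
from skimage.graph import MCP_Geometric

# Hypothetical cumulative conservation-cost raster (whale density x status).
whale_cost = np.array([[0.1, 0.9, 0.9, 0.1],
                       [0.1, 0.8, 0.8, 0.1],
                       [0.1, 0.1, 0.1, 0.1]], dtype=float)

def route(multiplier, start=(0, 0), end=(0, 3)):
    # Total resistance = distance (1 everywhere) + weighted conservation cost.
    resistance = 1.0 + multiplier * whale_cost
    mcp = MCP_Geometric(resistance)
    mcp.find_costs([start], [end])
    return mcp.traceback(end)

print(route(0.0))   # shortest route, straight through high whale density
print(route(10.0))  # detours around the high-cost cells
```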

Essential inputs to these decision frameworks are the species distributions. The two preceding chapters present species distribution models from the two case study areas, the U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred for estimating potential biological removal, per Marine Mammal Protection Act requirements in the U.S., the necessary parameters, especially distance and angle of observation, are less readily available across publicly mined datasets.

In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operating Characteristic (ROC) curves were used to define the optimal threshold that minimizes the false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
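
One common way to pick such a threshold is to maximize Youden's J statistic (sensitivity + specificity - 1) along the ROC curve; a minimal sketch with scikit-learn and invented observations (the chapter's exact criterion may differ):

```python
import numpy as np
from sklearn.metrics import roc_curve

# Illustrative observed presences (1) / absences (0) and model probabilities.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.5, 0.6])

fpr, tpr, thresholds = roc_curve(y_true, y_prob)
j = tpr - fpr                      # Youden's J balances both error rates
best = thresholds[np.argmax(j)]
presence = y_prob >= best          # map probabilities to presence/absence
print(best, presence)
```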

For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic line-transect marine mammal surveys conducted by Raincoast Conservation Foundation over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007). Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven useful where fewer observations are available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, reporting for spring and autumn seasons for the first time (rather than summer alone), and providing new abundance estimates for the Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution, against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and the increased risk of oil spills and ocean noise associated with growing container-ship and oil-tanker traffic in British Columbia’s continental shelf waters.
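
For intuition, the stratum-level CDS estimate rests on the classical line-transect estimator; a small sketch assuming a half-normal detection function and invented sightings:

```python
import numpy as np

# Illustrative line-transect data: perpendicular distances (km) to sightings
# along L km of transect, assuming a half-normal detection function.
distances = np.array([0.05, 0.12, 0.30, 0.02, 0.21, 0.08])
L = 120.0                                   # total transect length (km)

sigma = np.sqrt(np.mean(distances ** 2))    # half-normal ML scale estimate
mu = sigma * np.sqrt(np.pi / 2)             # effective strip half-width (km)
D = len(distances) / (2 * mu * L)           # animals per km^2
print(f"density estimate: {D:.3f} / km^2")
```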

Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite-derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance, which can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework with interchangeable map and tradeoff plot views. These products make complex processes transparent for scenario gaming by conservation, industry, and stakeholders towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management, and dynamic ocean management.

Relevance: 10.00%

Abstract:

North American freshwater runoff records have been used to support the case that climate flickers were caused by shutdowns of the ocean thermohaline circulation (THC) resulting from reversals of meltwater discharges. Inconsistencies in the documentation of these meltwater switches, however, continue to fuel the debate on the cause(s) of the oscillatory nature of the deglacial climate. New oxygen and carbon isotope records from the northern Gulf of Mexico depict in exceptional detail the succession of meltwater floods and pauses through the southern routing during the interval 16 to 8.9 ka (14C years BP; ka, kiloannum). The records underscore the bimodal role played by the Gulf of Mexico as a destination of meltwater discharges from the receding Laurentide Ice Sheet. The evidence indicates that the Gulf of Mexico acted as the principal source of superfloods at 13.4, 12.6, and 11.9 ka that reached the North Atlantic and contributed significantly to density stratification, disruption of ocean ventilation, and cold reversals. The Gulf of Mexico lapsed into a "relief valve" role in post-Younger Dryas time, when meltwater discharges were rerouted south at 9.9, 9.7, 9.4, and 9.1 ka, thus temporarily interrupting North Atlantic-bound freshwater discharges from Lake Agassiz. The history of meltwater events in the Gulf of Mexico contradicts the model in which meltwater flow via the eastern outlets into the North Atlantic disrupted the ocean THC, causing cooling, while diversions to the Gulf of Mexico via the Mississippi River enhanced THC and warming.

Relevance: 10.00%

Abstract:

Coring during Integrated Ocean Drilling Program Expeditions 315, 316, and 333 recovered turbiditic sands from the forearc Kumano Basin (Site C0002), a Quaternary slope basin (Site C0018), and the uplifted trench wedge (Site C0006) along the Kumano Transect of the Nankai Trough accretionary wedge offshore southwest Japan. The compositions of the submarine turbiditic sands are investigated in terms of bulk and heavy mineral modal compositions to identify their provenance and dispersal mechanisms, as they may reflect changes in regional tectonics during the past ca. 1.5 Myr. The results show a marked change in the detrital signature and heavy mineral composition of the forearc and slope basin facies around 1 Ma. This sudden change is interpreted to reflect a major change in sand provenance, rather than heavy mineral dissolution and/or diagenetic effects, in response to changing tectonics and sedimentation patterns. In the trench-slope basin, the sands older than 1 Ma were probably eroded from the exposed Cretaceous-Tertiary accretionary complex of the Shimanto Belt and transported via the former course of the Tenryu submarine canyon system, which today enters the Nankai Trough northeast of the study area. In contrast, the high abundance of volcanic lithics and the volcanic heavy mineral suites of the sands younger than 1 Ma point to a strong volcanic component of sediment derived from the Izu-Honshu collision zone, probably funnelled to this site through the Suruga Canyon. However, sands in the forearc basin show a persistent presence of blue sodic amphiboles across the 1 Ma boundary, indicating a continuous flux of sediments from the Kumano/Kinokawa River. This implies that the sands in the older turbidites were transported by transverse flow down the slope. The slope basin facies then switched to reflect longitudinal flow around 1 Ma, when the turbiditic sands tapped a volcanic provenance in the Izu-Honshu collision zone, while the transversely transported sediments became confined to the Kumano Basin. The change in the depositional systems around 1 Ma is therefore a manifestation of the switch of the sediment routing pattern from transverse to long-distance axial flow, in response to forearc high uplift along the megasplay fault.

Relevance: 10.00%

Abstract:

The advances in low-power microprocessors, wireless networks, and embedded systems have raised the need to utilize the significant resources of mobile devices. These devices, for example smartphones, tablets, laptops, wearables, and sensors, are gaining enormous processing power, storage capacity, and wireless bandwidth. In addition, advances in wireless mobile technology have created a new communication paradigm in which a wireless network can be created without any prior infrastructure, called a mobile ad hoc network (MANET). While progress is being made towards improving the efficiency of mobile devices and the reliability of wireless mobile networks, mobile technology continuously faces the challenges of unpredictable disconnections, dynamic mobility, and the heterogeneity of routing protocols. Hence, traditional wired and wireless routing protocols are not suitable for MANETs due to their unique, dynamic ad hoc nature. For this reason, the research community has been developing protocols for routing in MANETs to cope with these challenges. However, no single generic ad hoc routing protocol is available so far that can address all the basic challenges of MANETs mentioned above. This diverse and ever-growing range of routing protocols has thus created barriers that prevent mobile nodes of different MANET taxonomies from intercommunicating, wasting a huge amount of valuable resources. To provide interaction between heterogeneous MANETs, the routing protocols require conversion of packets, meta-models, and their behavioural capabilities. Here, the fundamental challenge is to understand the packet-level message format, meta-model, and behaviour of different routing protocols, which differ significantly across MANET taxonomies. To overcome these issues, this thesis proposes an Interoperable Framework for heterogeneous MANETs called IF-MANET. The framework hides the complexities of heterogeneous routing protocols and provides a homogeneous layer for seamless communication between them. The framework creates a unique Ontology for MANET routing protocols and a Message Translator to semantically compare packets and generate the missing fields using the rules defined in the Ontology. Hence, translation between existing as well as newly arriving routing protocols is achieved dynamically and on the fly. To discover a route for the delivery of packets across heterogeneous MANET taxonomies, IF-MANET creates a special Gateway node that provides cluster-based inter-domain routing. The IF-MANET framework can be used to develop middleware applications, for example mobile grid computing that could potentially utilise the huge amounts of aggregated data collected from heterogeneous mobile devices, or disaster and crisis management applications providing on-the-fly, infrastructure-less emergency communication across organisations by utilising different MANET taxonomies.
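
The Message Translator idea, comparing the fields of a packet from one protocol and generating the missing fields of another from ontology rules, can be sketched as follows (the field names and rules are hypothetical, not IF-MANET's actual Ontology):

```python
# Hypothetical AODV route-request packet.
AODV_RREQ = {"src": "10.0.0.1", "dst": "10.0.0.9", "hop_count": 3, "seq": 17}

# Ontology-style rules: target field -> (semantically closest source field,
# default used to generate the field when the source packet lacks it).
RULES_AODV_TO_DSR = {
    "source":    ("src", None),
    "target":    ("dst", None),
    "segs_left": ("hop_count", 0),
    "id":        ("seq", 0),
}

def translate(packet, rules):
    """Map known fields across protocols; generate missing ones from rules."""
    return {field: packet.get(src_field, default)
            for field, (src_field, default) in rules.items()}

print(translate(AODV_RREQ, RULES_AODV_TO_DSR))
```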

Relevance: 10.00%

Abstract:

Recently, honeycomb meshes have been considered as alternative candidates for interconnection networks in parallel and distributed computer systems. This paper presents a solution to one of the open problems about honeycomb meshes: the so-called three disjoint paths problem. The problem requires minimizing the length of the longest of three disjoint paths between degree-3 nodes. The solution provides information for re-routing traffic along the network in the presence of faults.
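
For general graphs, vertex-disjoint paths can be computed via max-flow; networkx ships a helper. A toy sketch (not the paper's honeycomb-specific construction, which additionally minimizes the longest of the three paths):

```python
import networkx as nx

# Toy 3-connected graph standing in for a honeycomb mesh fragment.
G = nx.cycle_graph(6)                       # nodes 0..5 in a ring
G.add_edges_from([(0, 3), (1, 4), (2, 5)])  # chords give three disjoint routes

paths = list(nx.node_disjoint_paths(G, 0, 3))
print(sorted(paths, key=len))  # [[0, 3], [0, 1, 2, 3], [0, 5, 4, 3]]
```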

Relevance: 10.00%

Abstract:

This paper proposes a JPEG 2000 compliant architecture capable of computing the 2-D Inverse Discrete Wavelet Transform. The proposed architecture uses a single processor and a row-based schedule to minimize control and routing complexity and to keep processor utilization at 100%. The design handles image borders through symmetric extension. The architecture has been implemented on the Xilinx Virtex-II FPGA.
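
A software model of the border handling: PyWavelets applies symmetric extension during the forward and inverse transforms, so a block survives the round trip exactly. The 'bior2.2' filter pair is used here as an illustrative stand-in for JPEG 2000's 5/3 transform:

```python
import numpy as np
import pywt

# A small image block; borders are extended symmetrically on both transforms.
block = np.random.rand(8, 8)

coeffs = pywt.dwt2(block, "bior2.2", mode="symmetric")    # forward 2-D DWT
restored = pywt.idwt2(coeffs, "bior2.2", mode="symmetric")  # inverse 2-D DWT

print(np.allclose(block, restored))  # True: borders survive the round trip
```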

Relevance: 10.00%

Abstract:

Resource selection (or query routing) is an important step in P2P information retrieval (IR). Though analogous to document retrieval in the sense of choosing a relevant subset of resources, resource selection methods have evolved independently from those for document retrieval. Among the reasons for this divergence is that document retrieval targets scenarios where the underlying resources are semantically homogeneous, whereas peers typically manage diverse content. We observe that clustering mitigates semantic heterogeneity at the resource selection layer of the clustered two-tier P2P IR architecture, and posit that this necessitates a fresh look at the applicability of document retrieval methods to resource selection within such a framework. This paper empirically benchmarks document retrieval models against state-of-the-art resource selection models for the problem of resource selection in the clustered P2P IR architecture, using classical IR evaluation metrics. Our benchmarking study shows that document retrieval models significantly outperform the other methods for this task. This indicates that the clustered P2P IR framework can exploit advances in document retrieval methods to deliver corresponding improvements in resource selection, suggesting a potential convergence of these fields for the clustered P2P IR architecture.
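
Treating each resource as one large document and ranking with a standard retrieval model is the essence of the approach; a minimal sketch using Dirichlet-smoothed query likelihood over invented data (not the paper's benchmark setup):

```python
import math
from collections import Counter

# Each resource (peer cluster) represented by the concatenation of its documents.
resources = {
    "r1": "wavelet transform image compression coding".split(),
    "r2": "routing protocol wireless ad hoc network".split(),
}
collection = [t for terms in resources.values() for t in terms]
cf = Counter(collection)

def query_likelihood(query, terms, mu=10.0):
    """Dirichlet-smoothed query likelihood, exactly as in document retrieval."""
    tf = Counter(terms)
    score = 0.0
    for q in query:
        p_c = cf[q] / len(collection)      # collection language model
        score += math.log((tf[q] + mu * p_c) / (len(terms) + mu))
    return score

query = "routing network".split()
ranked = sorted(resources, key=lambda r: query_likelihood(query, resources[r]),
                reverse=True)
print(ranked)  # ['r2', 'r1']
```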

Relevance: 10.00%

Abstract:

Content Centric Networking (CCN) is a proposed future Internet architecture based on the concept of content names instead of the host names used in the traditional Internet architecture. The CCN architecture may modify the existing Internet architecture or replace it completely. In this paper, we present modifications to the existing Domain Name System (DNS) based on the CCN architecture requirements, without changing the existing routing architecture. The proposed solution thus achieves the benefits of both CCN and the existing network infrastructure, i.e. content-based routing independent of host location, caching, and content delivery protocols.
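
A toy illustration of the direction proposed: resolving a content name to the set of hosts currently holding the content, with DNS-style caching (the record layout is hypothetical; the paper defines the actual DNS modifications):

```python
# Hypothetical content-name records: content name -> hosts holding a replica.
CONTENT_RECORDS = {
    "/videos/lecture1": ["198.51.100.7", "203.0.113.2"],
}
cache = {}

def resolve(content_name):
    """Resolve a content name, independent of any single host location."""
    if content_name in cache:               # reuse of existing DNS caching
        return cache[content_name]
    hosts = CONTENT_RECORDS.get(content_name, [])
    cache[content_name] = hosts
    return hosts

print(resolve("/videos/lecture1"))  # any replica can serve the content
```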

Relevance: 10.00%

Abstract:

In acoustic instruments, the controller and the sound-producing system are often one and the same object. If virtual-acoustic instruments are to be designed not only to simulate the vibrational behaviour of a real-world counterpart but also to inherit much of its interface dynamics, it makes sense for the physical form of the controller to be similar to that of the emulated instrument. The specific physical model configuration discussed here reconnects a (silent) string controller with a modal synthesis string resonator across the real and virtual domains by directly routing excitation signals and model parameters. The excitation signals are estimated in their original force-like form via careful calibration of the sensor, using adaptive filtering techniques to design an appropriate inverse filter. In addition, the excitation position is estimated from sensors mounted under the legs of the bridges on either end of the prototype string controller. The proposed methodology is explained and exemplified with preliminary results obtained from a number of off-line experiments.
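
The inverse-filter design step can be sketched with a plain LMS adaptive filter identifying the inverse of a (hypothetical, minimum-phase) sensor response against the known calibration excitation; the paper's calibration details differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor impulse response to be inverted, and a known
# force-like calibration excitation driving the sensor.
h = np.array([1.0, 0.5, 0.25])
x = rng.standard_normal(4000)
d = np.convolve(x, h)[: len(x)]           # what the sensor actually measures

n_taps, mu = 16, 0.01                     # inverse-filter length, LMS step size
w = np.zeros(n_taps)
for n in range(n_taps - 1, len(x)):
    u = d[n - n_taps + 1 : n + 1][::-1]   # most recent sensor samples
    e = x[n] - w @ u                      # error vs. the true excitation
    w += mu * e * u                       # LMS gradient step

print(f"final error magnitude: {abs(e):.4f}")  # small once w inverts h
```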

Relevance: 10.00%

Abstract:

The advancement of GPS technology has made it possible to use GPS devices not only as orientation and navigation tools but also as tools to track spatiotemporal information. GPS tracking data can be applied broadly in location-based services, such as studying the spatial distribution of the economy, transportation routing and planning, traffic management, and environmental control. Knowledge of how to process the data from a standard GPS device is therefore crucial for further use. Previous studies have each addressed particular processing issues. This paper, however, aims to outline a general procedure for processing GPS tracking data. The procedure is illustrated step by step using real-world GPS data of car movements in Borlänge, in central Sweden.
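
A typical early step in such a procedure is discarding implausible fixes, for example by the speed implied between consecutive points; a small sketch with invented coordinates near Borlänge and a hypothetical threshold:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) fixes in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# (lat, lon, unix time); the third fix implies an impossible jump.
track = [(60.48, 15.42, 0), (60.49, 15.43, 60), (61.90, 17.00, 120),
         (60.50, 15.44, 180)]

MAX_KMH = 150  # hypothetical plausibility threshold for car travel
clean = [track[0]]
for fix in track[1:]:
    dt_h = (fix[2] - clean[-1][2]) / 3600
    if haversine_km(clean[-1][:2], fix[:2]) / dt_h <= MAX_KMH:
        clean.append(fix)
print(len(clean), "of", len(track), "fixes kept")
```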

Relevance: 10.00%

Abstract:

Each year, fire burns tens of thousands of hectares of Québec's forests. The annual cost of preventing and fighting forest fires in Québec is on the order of several tens of millions of dollars. This work contributes to reducing these costs by automating the planning of suppression operations for major forest fires. To this end, an integer linear programming model was developed, solved, and tested, introducing a new special case to the Vehicle Routing Problem (VRP) literature. The model addresses the aerial deployment of the resources available for extinguishing the fires. It was tested with CPLEX on cases drawn from real data, and reduced the planning time of suppression operations for major forest fires by 75% in typical situations.
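
To give a flavor of such a formulation, here is a tiny assignment-style deployment model in PuLP with invented resources and travel times; the thesis model is a richer VRP variant solved with CPLEX:

```python
from pulp import (LpProblem, LpVariable, LpMinimize, lpSum, LpBinary,
                  PULP_CBC_CMD)

# Hypothetical aerial resources and fire sectors with travel times (minutes).
resources = ["tanker1", "tanker2", "helicopter"]
sectors = ["north", "east"]
travel = {("tanker1", "north"): 20, ("tanker1", "east"): 35,
          ("tanker2", "north"): 30, ("tanker2", "east"): 25,
          ("helicopter", "north"): 15, ("helicopter", "east"): 40}

prob = LpProblem("fire_deployment", LpMinimize)
x = {(r, s): LpVariable(f"x_{r}_{s}", cat=LpBinary)
     for r in resources for s in sectors}

prob += lpSum(travel[r, s] * x[r, s] for r in resources for s in sectors)
for s in sectors:                       # each sector gets at least one resource
    prob += lpSum(x[r, s] for r in resources) >= 1
for r in resources:                     # each resource serves at most one sector
    prob += lpSum(x[r, s] for s in sectors) <= 1

prob.solve(PULP_CBC_CMD(msg=False))
print([(r, s) for (r, s) in x if x[r, s].value() == 1])
```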

Relevance: 10.00%

Abstract:

This work addresses the role of the port of Ravenna in the import/export of fruit and vegetable products. After a careful analysis of the data, a study of the maritime routes, and the use of a DBMS to manage a complex database, an integer linear programming model is proposed for a combined ship routing, ship scheduling, and full ship-load balancing problem. The objective is to maximize the profit derived from a selling price, subject to the various logistics costs. The model chooses the optimal route, in terms of the order in which to visit the ports that import and export the products under study. It also handles the passage of time, returning as part of the solution the optimal day on which to visit each port. Finally, it finds the optimal split of the containers on board the ship among the product types.
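
The ship-load balancing component alone can be sketched as a small integer program; the quantities below are invented, and the actual model also selects the route and visit days:

```python
from pulp import (LpProblem, LpVariable, LpMaximize, lpSum, LpInteger,
                  PULP_CBC_CMD)

# Hypothetical profit per container (TEU) per product, sellable demand,
# and total ship capacity.
profit = {"oranges": 420, "kiwis": 380, "apples": 300}
demand = {"oranges": 40, "kiwis": 25, "apples": 60}
CAPACITY = 90

prob = LpProblem("ship_load", LpMaximize)
n = {p: LpVariable(f"teu_{p}", lowBound=0, upBound=demand[p], cat=LpInteger)
     for p in profit}
prob += lpSum(profit[p] * n[p] for p in profit)   # maximize total profit
prob += lpSum(n.values()) <= CAPACITY             # ship capacity

prob.solve(PULP_CBC_CMD(msg=False))
print({p: int(n[p].value()) for p in profit})
# {'oranges': 40, 'kiwis': 25, 'apples': 25}: high-margin products fill first
```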

Relevance: 10.00%

Abstract:

The high cost of municipal waste collection and the need to meet targets set out in legal instruments are two motivations that lead to the need to optimize the waste collection service. Optimizing waste collection is a highly complex problem involving the analysis of transport networks. This work proposes solutions for optimizing the collection of undifferentiated municipal waste, based on a case study: route RSU I 06 in the municipality of Aveiro. For this purpose, a geographic representation and analysis application was used: the ArcGIS software ArcMap and its Network Analyst extension, developed to compute optimized circuits between points of interest. The work with Network Analyst covers two of its functionalities (Route and Vehicle Routing Problem). Relative to the current collection circuit, and based on the tests carried out, it was possible to conclude that this application can produce optimized collection circuits that are shorter or take less time. At the management level, it also showed that, with the current container capacity, it would be feasible to halve the collection frequency from six times per week, dividing the collection area into two areas according to the needs of each location and further reducing the collection effort. Applying Network Analyst to the case study showed that it is a tool of great interest for managing municipal waste collection, despite some application restrictions, and that the quality and effectiveness of the optimization depend on the quality of the input data, in particular the available geographic description of the streets, and, to a large extent, on the management model considered.
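
Outside ArcGIS, the core of what the Route solver does, ordering stops into a short circuit, can be approximated with networkx's TSP helper; a stand-in sketch with invented distances, not Network Analyst itself:

```python
import networkx as nx
from networkx.algorithms.approximation import traveling_salesman_problem

# Hypothetical street-network distances (km) between a depot and three
# container sites.
G = nx.Graph()
G.add_weighted_edges_from([
    ("depot", "c1", 2.0), ("depot", "c2", 3.5), ("depot", "c3", 1.5),
    ("c1", "c2", 1.0), ("c1", "c3", 2.5), ("c2", "c3", 2.0),
])

tour = traveling_salesman_problem(G, cycle=True)   # approximate circuit
length = sum(G[a][b]["weight"] for a, b in zip(tour, tour[1:]))
print(tour, f"-> {length:.1f} km")
```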