746 results for wormhole routing
Abstract:
Content Centric Network (CCN) is a proposed future internet architecture based on the concept of content names instead of the host names used in the traditional internet architecture. The CCN architecture may modify the existing internet architecture or replace it completely. In this paper, we present modifications to the existing Domain Name System (DNS) based on the CCN architecture requirements, without changing the existing routing architecture. The proposed solution therefore achieves the benefits of both CCN and the existing network infrastructure (i.e. content-based routing, independence from host location, caching and content delivery protocols).
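As an illustrative aside (the abstract does not give the paper's actual scheme), a CCN-style content name could ride on top of the existing DNS by reversing its components into an ordinary domain name and letting standard resolvers locate a hosting replica; the name scheme and the `cdn.example` suffix below are assumptions, not the paper's design:

```python
# Illustrative sketch only: one hypothetical way to map CCN-style content
# names onto ordinary DNS lookups. The naming convention and suffix are
# assumptions, not the paper's actual mechanism.
import socket

def content_name_to_dns(content_name: str, suffix: str = "cdn.example") -> str:
    """Map a CCN-style name like '/news/2024/video' to a DNS name.

    Components are reversed so the most general part sits at the top
    of the DNS hierarchy: 'video.2024.news.cdn.example'.
    """
    parts = [p for p in content_name.strip("/").split("/") if p]
    return ".".join(reversed(parts)) + "." + suffix

def resolve_content(content_name: str) -> str:
    """Resolve a content name to the address of some hosting replica."""
    return socket.gethostbyname(content_name_to_dns(content_name))

# Example (would require the DNS records to actually exist):
# print(resolve_content("/news/2024/video"))
```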
Abstract:
In acoustic instruments, the controller and the sound-producing system are often one and the same object. If virtual-acoustic instruments are to be designed not only to simulate the vibrational behaviour of a real-world counterpart but also to inherit much of its interface dynamics, it makes sense for the physical form of the controller to be similar to that of the emulated instrument. The specific physical-model configuration discussed here reconnects a (silent) string controller with a modal-synthesis string resonator across the real and virtual domains by direct routing of excitation signals and model parameters. The excitation signals are estimated in their original force-like form via careful calibration of the sensor, making use of adaptive filtering techniques to design an appropriate inverse filter. In addition, the excitation position is estimated from sensors mounted under the legs of the bridges on either end of the prototype string controller. The proposed methodology is explained and exemplified with preliminary results obtained from a number of off-line experiments.
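The abstract mentions adaptive filtering to design an inverse filter for the sensor; as a rough illustration of that general idea (not the authors' actual calibration procedure), an LMS adaptive filter can be trained so that, fed with the sensor signal, its output approaches a known reference excitation:

```python
# Rough illustration of inverse-filter design by LMS adaptive filtering,
# in the spirit of the sensor calibration described above. Filter length,
# step size and signals are placeholders, not the authors' setup.
import numpy as np

def lms_inverse_filter(sensor: np.ndarray, reference: np.ndarray,
                       n_taps: int = 64, mu: float = 0.01) -> np.ndarray:
    """Adapt an FIR filter w so that (w * sensor) approximates reference."""
    w = np.zeros(n_taps)
    for n in range(n_taps, len(sensor)):
        x = sensor[n - n_taps:n][::-1]   # most recent sample first
        e = reference[n] - w @ x         # instantaneous estimation error
        w += mu * e * x                  # LMS weight update
    return w

# Synthetic check: recover the inverse of a known (minimum-phase) response.
rng = np.random.default_rng(0)
force = rng.standard_normal(5000)                        # "true" excitation
sensor = np.convolve(force, [1.0, 0.5], mode="full")[:5000]
w = lms_inverse_filter(sensor, force)
estimate = np.convolve(sensor, w, mode="full")[:5000]    # approximates force
```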
Abstract:
The advancement of GPS technology has made it possible to use GPS devices not only as orientation and navigation tools, but also as tools for tracking spatiotemporal information. GPS tracking data can be broadly applied in location-based services, such as analysing the spatial distribution of the economy, transportation routing and planning, traffic management and environmental control. Therefore, knowing how to process the data from a standard GPS device is crucial for further use. Previous studies have each considered individual issues of this data processing. This paper, however, aims to outline a general procedure for processing GPS tracking data. The procedure is illustrated step by step by the processing of real-world GPS data of car movements in Borlänge in central Sweden.
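Two steps that recur in such procedures are computing the distance between successive fixes and discarding physically implausible points; a minimal sketch (thresholds illustrative only, not the paper's values):

```python
# Minimal sketch of two common GPS track processing steps: great-circle
# distance between fixes and removal of implausible points.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two fixes, in metres."""
    R = 6371000.0
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * R * asin(sqrt(a))

def drop_outliers(track, max_speed_ms=70.0):
    """Remove fixes implying an impossible speed for a car.

    track: list of (timestamp_s, lat, lon) tuples, sorted by time.
    """
    clean = [track[0]]
    for t, lat, lon in track[1:]:
        t0, lat0, lon0 = clean[-1]
        dt = t - t0
        if dt > 0 and haversine_m(lat0, lon0, lat, lon) / dt <= max_speed_ms:
            clean.append((t, lat, lon))
    return clean
```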
Abstract:
Every year, fire burns tens of thousands of hectares of Quebec's forests. The annual cost of forest fire prevention and suppression in Quebec is on the order of several tens of millions of dollars. This work contributes to reducing these costs by automating the planning process for suppression operations against major forest fires. To this end, a linear integer mathematical model was developed, solved and tested, introducing a new special case to the literature on Vehicle Routing Problems (VRP). This mathematical model concerns the aerial deployment of the resources available for fire extinction. The model was tested with CPLEX on cases drawn from real data. It reduced the planning time for suppression operations against major forest fires by 75% in typical situations.
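The abstract does not reproduce the model, but the classical VRP skeleton that such formulations extend uses binary variables x_{ijk} indicating whether resource k travels from site i to site j; a generic sketch (not the thesis' actual model, subtour-elimination constraints omitted):

```latex
% Generic VRP-style skeleton (illustrative only):
% x_{ijk} = 1 if aerial resource k travels from site i to site j.
\min \sum_{k \in K}\sum_{i \in V}\sum_{j \in V} c_{ij}\, x_{ijk}
\quad \text{s.t.} \quad
\sum_{k \in K}\sum_{i \in V} x_{ijk} = 1 \;\; \forall j \in V\setminus\{0\},
\qquad
\sum_{i \in V} x_{ihk} = \sum_{j \in V} x_{hjk} \;\; \forall h \in V,\, k \in K,
\qquad
x_{ijk} \in \{0,1\}.
```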
Abstract:
Travelling from one point of the universe to another through flat spacetime requires such colossal times as to be impossible for our species; an interstellar journey could therefore only be accomplished by means of relativistic topologies capable of shortening the distance between points of the universe. After giving a series of reasons why black holes and the Einstein-Rosen bridge are unsuitable for this purpose, a particular class of solutions of Einstein's equations, first presented by Michael S. Morris and Kip S. Thorne, is introduced: it describes wormholes which, at least in principle, are traversable by human beings, since they have no event horizon at the throat. This last property, together with Einstein's field equations, places rather extreme constraints on the kind of material capable of producing the wormhole's spacetime curvature: at the wormhole throat the matter must sustain a radial tension of enormous magnitude, of the order of that found at the centre of the most massive neutron stars for throats with a radius of just a few kilometres. Moreover, this tension must exceed the material's energy density: no material with this property is known today, and such a material violates both of the "energy conditions" underlying very important and well-verified theorems of general relativity. The existence of this matter cannot be excluded a priori, since there is no experimental or mathematical proof of its physical unreality, but as it has never been observed it is important to use as little of it as possible in the wormhole: this will lead us to show that wormholes in which the exotic material has a negative energy density for static observers are the best suited to interstellar travel.
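For reference, the Morris-Thorne class of solutions referred to above is usually written, in Schwarzschild-like coordinates with c = G = 1, as

```latex
ds^2 = -e^{2\Phi(r)}\,dt^2 + \frac{dr^2}{1 - b(r)/r}
       + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right),
```

where Φ(r) is the redshift function (finite everywhere, hence no horizon) and b(r) the shape function; the flaring-out condition at the throat r = b(r) = b₀ is what forces the radial tension to exceed the energy density there, i.e. the exotic matter discussed in the abstract.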
Abstract:
This thesis examines the role of the port of Ravenna in the import/export of fruit and vegetable products. After a careful analysis of the data, a study of the maritime routes and the use of a DBMS to manage a complex database, an integer linear programming model is proposed for a problem of ship routing, ship scheduling and full ship-load balancing. The objective is to maximise the profit derived from a selling price, subject to the various logistics costs. The model selects the optimal route, in terms of the order in which the ports that import and export the products under study are visited. It is also able to handle the passage of time, providing as output the optimal day on which to visit each of the ports considered. Finally, it finds the optimal allocation of the number of containers on board the ship for each product type.
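In the broad shape described (an illustrative sketch, not the thesis' exact formulation, which also handles time and capacity), the objective of such a model is a profit maximisation of the form:

```latex
% x_{ij} = 1 if the ship sails from port i to port j;
% q_{pi} = containers of product p handled at port i;
% r_p = unit revenue, c_{ij} = sailing cost, h_p = unit handling cost.
\max \;\; \sum_{p \in P}\sum_{i \in N} r_p\, q_{pi}
      \;-\; \sum_{i \in N}\sum_{j \in N} c_{ij}\, x_{ij}
      \;-\; \sum_{p \in P}\sum_{i \in N} h_p\, q_{pi}
```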
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The high cost of municipal waste collection and the need to meet targets set out in legal instruments are two motivations that lead to the need to optimise the waste collection service. Optimising waste collection is a problem of high computational complexity that involves the analysis of transport networks. This work proposes solutions for optimising the collection of undifferentiated municipal waste, based on a case study: route RSU I 06 in the municipality of Aveiro. To this end, a geographic representation and analysis application was used: the ArcGIS software module ArcMap and its Network Analyst extension, developed to compute optimised routes between points of interest. The work carried out with Network Analyst covers two of its functionalities (Route and Vehicle Routing Problem). Compared with the current collection route, and based on the tests performed, it was concluded that this application yields optimised collection routes that are shorter or of lower duration. At the management level, it was further concluded that, with the current container capacity, it would be feasible to halve the collection frequency from six times per week, dividing the collection area into two areas according to the needs of each location and reducing the collection effort even further. Applying Network Analyst to the case study showed that it is a very useful tool in the management of municipal waste collection, despite some restrictions on its application, and that the quality and effectiveness of the optimisation procedure depend on the quality of the input data, in particular on the geographic description available for the streets, and, to a large extent, on the management model considered.
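Network Analyst is proprietary, but the Route functionality it provides is, at heart, a shortest-path search over a weighted street network; a toy open-source stand-in (networkx instead of ArcGIS, with invented street segments and travel times) looks like:

```python
# Toy stand-in for the Network Analyst "Route" functionality using
# networkx; segment names and travel times are invented for illustration.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("depot", "A", 4.0), ("A", "B", 2.5), ("B", "container_7", 1.0),
    ("depot", "C", 3.0), ("C", "container_7", 5.5),
], weight="minutes")

route = nx.shortest_path(G, "depot", "container_7", weight="minutes")
duration = nx.shortest_path_length(G, "depot", "container_7", weight="minutes")
print(route, duration)   # ['depot', 'A', 'B', 'container_7'] 7.5
```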
Abstract:
The wide adoption of the Internet Protocol (IP) as the de facto protocol for most communication networks has created a need to develop IP-capable data link layer protocol solutions for Machine-to-Machine (M2M) and Internet of Things (IoT) networks. However, the wireless networks used for M2M and IoT applications usually lack the resources commonly associated with modern wireless communication networks. The existing IP-capable data link layer solutions for wireless IoT networks provide the necessary overhead-minimising and frame-optimising features, but are often built to be compatible only with IPv6 and specific radio platforms. The objective of this thesis is to design an IPv4-compatible data link layer for Netcontrol Oy's narrowband half-duplex packet data radio system. Based on extensive literature research, system modelling and solution concept testing, this thesis proposes the tunslip protocol as the basis for the system's data link layer protocol development. In addition to the functionality of tunslip, this thesis discusses the further network, routing, compression, security and collision avoidance changes required on the radio platform for it to be IP-compatible while still maintaining its point-to-multipoint and multi-hop network characteristics. The data link layer design consists of the radio application, a dynamic Maximum Transmission Unit (MTU) optimisation daemon and the tunslip interface. The proposed design uses tunslip to create an IP-capable data link protocol interface. The radio application receives data from tunslip, compresses the packets, and uses the IP addressing information for radio network addressing and routing before forwarding the message to the radio network. The dynamic MTU size optimisation daemon adjusts the tunslip interface's maximum MTU size according to a link quality assessment calculated from radio network diagnostic data received from the radio application. To determine the usability of tunslip as the basis for the data link layer protocol, the tunslip interface was tested with both IEEE 802.15.4 radios and packet data radios. The test cases measure the radio network's usability for User Datagram Protocol (UDP) based applications without applying any header or content compression. The test results for the packet data radios show that the typical success rate for packet reception over a single-hop link is above 99%, with a round-trip delay of 0.315 s for 63 B packets.
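The UDP round-trip measurement described can be reproduced in outline with a simple echo test; the address, port, packet size and counts below are placeholders, not the thesis' test parameters:

```python
# Outline of a UDP round-trip test of the kind described above: send
# fixed-size packets to an echo server over the radio link and record
# success rate and round-trip delay.
import socket
import time

def udp_rtt_test(host="192.0.2.1", port=9000, size=63, count=100, timeout=2.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    payload = b"\x00" * size
    delays = []
    for _ in range(count):
        t0 = time.monotonic()
        sock.sendto(payload, (host, port))
        try:
            sock.recvfrom(2048)                     # wait for the echo
            delays.append(time.monotonic() - t0)
        except socket.timeout:
            pass                                    # count as a lost packet
    sock.close()
    success_rate = len(delays) / count
    avg_rtt = sum(delays) / len(delays) if delays else float("nan")
    return success_rate, avg_rtt
```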
The berth allocation problem: a study of the simulated annealing and genetic algorithm heuristics
Abstract:
This work presents a case study of the Simulated Annealing and Genetic Algorithm heuristics for a highly relevant problem found in port systems, the Berth Allocation Problem. This problem deals with scheduling and allocating ships to berthing areas along a quay. The modelling used in this research is that of Mauri (2008) [28], which treats the problem as a Vehicle Routing Problem with Multiple Depots and no Time Windows. An appropriate simulation testing environment was developed, in which the analysis scenario was built from real situations found in the ship schedules of a container terminal. The computational tests carried out show the performance of the heuristics with respect to the objective function and the computational time, in order to evaluate which of the techniques yields the better results.
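As a reminder of the first of the two heuristics studied (a generic skeleton, not Mauri's implementation), simulated annealing repeatedly perturbs the current berth assignment and accepts worse solutions with a temperature-dependent probability:

```python
# Generic simulated annealing skeleton of the kind applied to the Berth
# Allocation Problem above; `cost` and `neighbour` stand for the
# problem-specific objective and perturbation (e.g. swapping two ships).
import math
import random

def simulated_annealing(initial, cost, neighbour,
                        t0=100.0, t_min=1e-3, alpha=0.95, iters=50):
    current = best = initial
    t = t0
    while t > t_min:
        for _ in range(iters):
            cand = neighbour(current)
            delta = cost(cand) - cost(current)
            # Always accept improvements; accept worse moves with
            # probability exp(-delta / t) so early search can escape
            # local minima.
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current = cand
                if cost(current) < cost(best):
                    best = current
        t *= alpha   # geometric cooling schedule
    return best
```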
Abstract:
This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely-believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: An approximation algorithm for an NP-Hard problem is a polynomial time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink. 
We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
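In symbols (standard definitions, restating the abstract): an algorithm ALG for a minimisation problem has approximation ratio α if its solution value is within a factor α of optimal on every instance, and the density driving the pruning argument is the cost per vertex:

```latex
\mathrm{ALG}(I) \;\le\; \alpha \cdot \mathrm{OPT}(I)
\quad \text{for every instance } I,
\qquad
\operatorname{density}(G) \;=\; \frac{c(G)}{|V(G)|}.
```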
Abstract:
Reconstructing Northern Hemisphere ice-sheet oscillations and meltwater routing to the ocean is important for better understanding the mechanisms behind abrupt climate changes. To date, research efforts have mainly focused on the North American (Laurentide) ice sheet (LIS), leaving the potential role of the European Ice Sheet (EIS), and of the Scandinavian Ice Sheet (SIS) in particular, largely unexplored. Using neodymium isotopes in detrital sediments deposited off the Channel River, we provide a continuous and well-dated record of the evolution of the EIS southern margin through the end of the last glacial period and during the deglaciation. Our results reveal that the evolution of the EIS margins was accompanied by substantial ice recession (especially of the SIS) and the simultaneous release of meltwater to the North Atlantic. These events occurred both during the build-up of the EIS to its LGM position (i.e., during Heinrich Stadial –HS– 3 and HS2; ∼31–29 ka and ∼26–23 ka, respectively) and during the deglaciation (i.e., at ∼22 ka, ∼20–19 ka and from 18.2 ± 0.2 to 16.7 ± 0.2 ka, corresponding to the first part of HS1). The deglaciation was discontinuous in character, and similar in timing to that of the southern LIS margin, with moderate ice-sheet retreat (from 22.5 ± 0.2 ka in the Baltic lowlands) as soon as northern summer insolation increased (from ∼23 ka) and an acceleration of the margin retreat thereafter (from ∼20 ka). Importantly, our results show that EIS retreat events and the release of meltwater to the North Atlantic during the deglaciation coincide with AMOC destabilisation and interhemispheric climate changes. They thus suggest that the EIS, together with the LIS, could have played a critical role in the climatic reorganisation that accompanied the last deglaciation. Finally, our data suggest that meltwater discharges to the North Atlantic produced by large-scale recession of continental parts of Northern Hemisphere ice sheets during HS could have been a possible source of the oceanic perturbations (i.e., AMOC shutdown) responsible for the marine-based ice-stream purge cycle, the so-called HEs, that punctuate the last glacial period.
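For context, the neodymium isotope signature used as a provenance tracer here is conventionally reported in epsilon notation relative to the CHUR reference value:

```latex
\varepsilon_{\mathrm{Nd}} = \left(
  \frac{\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{sample}}}
       {\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{CHUR}}}
  - 1 \right) \times 10^{4}
```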
Abstract:
Real-time mechanisms and techniques are used when a system, whether embedded or large-scale, must possess certain characteristics that ensure its quality of service. Real-time systems are thus defined as systems subject to strict temporal constraints, which need to exhibit high levels of reliability in order to guarantee, in every instance, the timely operation of the system. Owing to the growing complexity of embedded systems, distributed architectures are frequently employed, where each module is normally responsible for a single function. In these cases a communication medium is needed so that the modules can communicate with one another and fulfil the desired functionality. Because of its high capacity and low cost, Ethernet technology has been the subject of study aimed at turning it into a communication medium with the quality of service characteristic of real-time systems. In response to this need, the HaRTES Switch was developed at the University of Aveiro; it is able to manage its resources dynamically so as to provide real-time guarantees to the network in which it is deployed. However, for a network architecture to give its nodes quality-of-service guarantees, it requires flow specification, correct traffic forwarding, resource reservation, admission control and packet scheduling. Unfortunately, although the HaRTES Switch possesses all these characteristics, it does not support standard protocols. This document presents the work carried out to integrate the SRP protocol into the HaRTES Switch.
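One of the quality-of-service ingredients listed above, admission control, can be illustrated with a simple bandwidth-utilisation check (a toy sketch, not the HaRTES or SRP mechanism itself): a new stream reservation is accepted only if the bandwidth already reserved plus the new request stays within the link's allocatable share:

```python
# Toy bandwidth-based admission control of the kind a real-time switch
# performs before accepting a stream reservation; the link rate and the
# 75% allocatable share are illustrative, not HaRTES/SRP parameters.
def admit(reserved_bps, new_bps, link_bps=100_000_000, share=0.75):
    """Accept the new stream only if total reserved bandwidth stays
    within the allocatable fraction of the link."""
    return sum(reserved_bps) + new_bps <= share * link_bps

reserved = [20_000_000, 35_000_000]
print(admit(reserved, 10_000_000))   # True: 65 Mb/s <= 75 Mb/s budget
print(admit(reserved, 25_000_000))   # False: 80 Mb/s exceeds the budget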
Abstract:
The U.S. railroad companies spend billions of dollars every year on railroad track maintenance in order to ensure safety and operational efficiency of their railroad networks. Besides maintenance costs, other costs such as train accident costs, train and shipment delay costs and rolling stock maintenance costs are also closely related to track maintenance activities. Optimizing the track maintenance process on the extensive railroad networks is a very complex problem with major cost implications. Currently, the decision making process for track maintenance planning is largely manual and primarily relies on the knowledge and judgment of experts. There is considerable potential to improve the process by using operations research techniques to develop solutions to the optimization problems on track maintenance. In this dissertation study, we propose a range of mathematical models and solution algorithms for three network-level scheduling problems on track maintenance: track inspection scheduling problem (TISP), production team scheduling problem (PTSP) and job-to-project clustering problem (JTPCP). TISP involves a set of inspection teams which travel over the railroad network to identify track defects. It is a large-scale routing and scheduling problem where thousands of tasks are to be scheduled subject to many difficult side constraints such as periodicity constraints and discrete working time constraints. A vehicle routing problem formulation was proposed for TISP, and a customized heuristic algorithm was developed to solve the model. The algorithm iteratively applies a constructive heuristic and a local search algorithm in an incremental scheduling horizon framework. The proposed model and algorithm have been adopted by a Class I railroad in its decision making process. Real-world case studies show the proposed approach outperforms the manual approach in short-term scheduling and can be used to conduct long-term what-if analyses to yield managerial insights. PTSP schedules capital track maintenance projects, which are the largest track maintenance activities and account for the majority of railroad capital spending. A time-space network model was proposed to formulate PTSP. More than ten types of side constraints were considered in the model, including very complex constraints such as mutual exclusion constraints and consecution constraints. A multiple neighborhood search algorithm, including a decomposition and restriction search and a block-interchange search, was developed to solve the model. Various performance enhancement techniques, such as data reduction, augmented cost function and subproblem prioritization, were developed to improve the algorithm. The proposed approach has been adopted by a Class I railroad for two years. Our numerical results show the model solutions are able to satisfy all hard constraints and most soft constraints. Compared with the existing manual procedure, the proposed approach is able to bring significant cost savings and operational efficiency improvement. JTPCP is an intermediate problem between TISP and PTSP. It focuses on clustering thousands of capital track maintenance jobs (based on the defects identified in track inspection) into projects so that the projects can be scheduled in PTSP. A vehicle routing problem based model and a multiple-step heuristic algorithm were developed to solve this problem. Various side constraints such as mutual exclusion constraints and rounding constraints were considered. 
The proposed approach has been applied in practice and has shown good performance in both solution quality and efficiency.
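The iterative constructive-heuristic-plus-local-search pattern described for TISP can be sketched generically (a skeleton only; `construct`, `improve` and the horizon step stand in for the railroad-specific constraints and moves):

```python
# Generic skeleton of the incremental scheduling horizon framework
# described for TISP: build a feasible schedule for each horizon window
# with a constructive heuristic, then refine it with local search.
def incremental_schedule(tasks, construct, improve,
                         horizon_days=7, total_days=365):
    schedule = []
    for start in range(0, total_days, horizon_days):
        window = [t for t in tasks
                  if start <= t["due_day"] < start + horizon_days]
        partial = construct(window, schedule)  # feasible insertion order
        schedule = improve(partial)            # local search on this horizon
    return schedule
```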