865 results for Vehicle routing
Abstract:
We consider a scenario where the communication nodes in a sensor network have limited energy, and the objective is to maximize the aggregate bits transported from sources to respective destinations before network partition due to node deaths. This performance metric is novel, and captures the useful information that a network can provide over its lifetime. The optimization problem that results from our approach is nonlinear; however, we show that it can be converted to a Multicommodity Flow (MCF) problem that yields the optimal value of the metric. Subsequently, we compare the performance of a practical routing strategy, based on Node Disjoint Paths (NDPs), with the ideal corresponding to the MCF formulation. Our results indicate that the performance of NDP-based routing is within 7.5% of the optimal.
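As a hedged sketch of the converted problem (our notation, not the paper's), the lifetime-bit maximization takes the shape of a standard MCF linear program with per-node energy budgets:

$$\max_{x \ge 0} \; \sum_{c}\sum_{j} x^{c}_{s_c j} \quad \text{s.t.} \quad \sum_{j} x^{c}_{ji} = \sum_{j} x^{c}_{ij} \;\; \forall c,\ \forall i \notin \{s_c, d_c\}, \qquad \sum_{c}\sum_{j} \bigl(e_{\mathrm{tx}}\, x^{c}_{ij} + e_{\mathrm{rx}}\, x^{c}_{ji}\bigr) \le E_i \;\; \forall i,$$

where $x^{c}_{ij}$ is the number of bits of commodity $c$ carried on link $(i,j)$, $s_c$ and $d_c$ are its source and destination, $e_{\mathrm{tx}}$ and $e_{\mathrm{rx}}$ are per-bit transmit and receive energies, and $E_i$ is the energy budget of node $i$.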
Abstract:
A scheme for built-in self-test of analog signals with minimal area overhead, for measuring on-chip voltages in an all-digital manner, is presented. The method is well suited to a distributed architecture, where the routing of analog signals over long paths is minimized. A clock is routed serially to sampling heads placed at the nodes of the analog test voltages. The sampling head at each test node, consisting of a pair of delay cells and a pair of flip-flops, locally converts the test voltage into a skew between a pair of subsampled signals, yielding as many subsampled signal pairs as there are nodes. To measure a particular analog voltage, the corresponding subsampled signal pair is fed to a delay measurement unit that measures the skew between the pair. The concept is validated with a test chip designed in a UMC 130-nm CMOS process. Sub-millivolt accuracy for static signals is demonstrated for a measurement time of a few seconds, and an effective number of bits of 5.29 is demonstrated for low-bandwidth signals in the absence of sample-and-hold circuitry.
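For orientation, the reported effective number of bits follows the standard relation to the measured signal-to-noise-and-distortion ratio:

$$\mathrm{ENOB} = \frac{\mathrm{SINAD_{dB}} - 1.76}{6.02},$$

so an ENOB of 5.29 corresponds to a SINAD of roughly $5.29 \times 6.02 + 1.76 \approx 33.6$ dB.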
Abstract:
Modern Multiprocessor Systems-on-Chip (MPSoCs) integrate a growing number of applications and processing units under ever shorter time-to-market pressure. Different IP cores can come from different vendors with different trust levels, but they typically use a Network-on-Chip (NoC) as their communication infrastructure, and an MPSoC can host multiple Trusted Execution Environments (TEEs). Apart from performance, power, and area, robust and secure system design is also gaining importance in the MPSoC research community. To build a secure system, the designer must know beforehand all kinds of attack possibilities for the system at hand. In this paper, we survey the possible attack scenarios on present-day MPSoCs and investigate a new one: a router attack targeting the NoC architecture. We show the validity of this attack by analyzing different present-day NoC architectures and demonstrating that they are all vulnerable to it. By launching a router attack, an attacker can easily control the whole chip, which makes it a very serious issue. Both routing-table-based and routing-logic-based routers are vulnerable to such attacks; in this paper, we address attacks on routing tables. We propose different monitoring-based countermeasures against routing-table-based router attacks in an MPSoC with multiple TEEs. Synthesis results show that the proposed countermeasures, viz. Runtime-monitor, Restart-monitor, Intermediate manager, and Auditor, occupy 26.6%, 22%, 0.2%, and 12.2% of the area of a routing-table-based router, respectively. In addition, we propose an Ejection address checker and a Local monitoring module inside the router that increase router area by 3.4% and 10.6%, respectively. Simulation results are also given, showing the effectiveness of the proposed monitoring-based countermeasures.
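As a minimal sketch of the monitoring idea (not the authors' hardware; the function names and the choice of deterministic XY routing as the reference are our assumptions), a monitor can recompute the expected output port for every destination and flag routing-table entries that deviate:

```python
# Hypothetical sketch: audit a routing table against deterministic XY routing
# on a 2D-mesh NoC. Any mismatch suggests the table entry has been tampered with.

EAST, WEST, NORTH, SOUTH, LOCAL = range(5)

def xy_expected_port(cur, dst, width):
    """Expected output port under XY routing for a mesh of the given width."""
    cx, cy = cur % width, cur // width
    dx, dy = dst % width, dst // width
    if dx > cx: return EAST
    if dx < cx: return WEST
    if dy > cy: return NORTH
    if dy < cy: return SOUTH
    return LOCAL

def audit(router_id, table, width):
    """Return the destinations whose table entry disagrees with XY routing."""
    return [dst for dst, port in table.items()
            if port != xy_expected_port(router_id, dst, width)]

# Router 5 in a 4x4 mesh, with one corrupted entry for destination 7.
table = {dst: xy_expected_port(5, dst, 4) for dst in range(16)}
table[7] = LOCAL  # injected corruption
print(audit(5, table, 4))  # -> [7]
```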
Abstract:
Wireless Sensor Networks have gained popularity due to their real-time applications and low cost. These networks provide solutions to scenarios that are critical, complicated, and sensitive, such as military fields, habitat monitoring, and disaster management. The nodes in wireless sensor networks are highly resource constrained, and routing protocols are designed to use the available resources efficiently in communicating a message from source to destination. Beyond resource management, the trustworthiness of neighboring or forwarding nodes and the energy levels of the nodes, which keep the network alive for a longer duration, must also be considered. This paper proposes a QoS Aware Trust Metric based Framework for Wireless Sensor Networks. The proposed framework safeguards a wireless sensor network from intruders by considering the trustworthiness of the forwarder node at every stage of multi-hop routing; it increases network lifetime by considering node energy levels, and it prevents an adversary from tracing the route from source to destination by providing path variation. The framework is implemented in the NS2 simulator. Experimental results show that the framework provides energy balance through the establishment of trustworthy paths from the source to the destination.
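A minimal sketch of the kind of composite forwarding decision such a framework implies (the weighting, field names, and jitter-based path variation are our assumptions, not the paper's protocol):

```python
# Hypothetical next-hop choice combining neighbor trust with residual energy;
# a small random jitter among near-equal candidates varies the chosen path,
# which frustrates an adversary trying to trace a fixed route.
import random

def next_hop(neighbors, alpha=0.7, jitter=0.05):
    """neighbors: list of dicts with 'id', 'trust' in [0,1], 'energy' in [0,1]."""
    scored = [(alpha * n["trust"] + (1 - alpha) * n["energy"]
               + random.uniform(0, jitter), n["id"]) for n in neighbors]
    return max(scored)[1]

print(next_hop([{"id": "A", "trust": 0.9, "energy": 0.4},
                {"id": "B", "trust": 0.7, "energy": 0.9}]))
```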
Abstract:
A person walks along a line (an idealisation of a forest trail, for example), placing relays as he walks, in order to create a multihop network connecting a sensor at a point along the line to a sink at the start of the line. The potential placement points are equally spaced along the line, and at each such location the decision to place or not to place a relay is based on link quality measurements to the previously placed relays. The location of the sensor is unknown a priori and is discovered as the deployment agent walks. In this paper, we extend our earlier work on this class of problems to the objective of achieving a 2-connected multihop network. We propose a network cost objective that is additive over the deployed relays and accounts for possible alternate routing over the multiple available paths. As in our earlier work, the problem is formulated as a Markov decision process. Placement algorithms are obtained for two source location models, which yield a discounted cost MDP and an average cost MDP. In each case we obtain structural results for an optimal policy and perform a numerical study that provides insights into the advantages and disadvantages of multi-connectivity. We validate the results of the numerical study experimentally in a forest-like environment.
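As a hedged sketch of the formulation (our notation, with the measured link-quality component of the state suppressed), the discounted-cost variant solves a Bellman equation of the form

$$V(s) = \min_{a \in \{\text{place},\,\text{skip}\}} \Bigl\{ c(s,a) + \theta\, \mathbb{E}\bigl[\,V(s') \mid s, a\,\bigr] \Bigr\},$$

where $c(s,a)$ is the one-step contribution to the additive network cost and $\theta \in (0,1)$ is the discount factor; structural results for an optimal policy are typically obtained from monotonicity properties of $V$.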
Abstract:
The diffusive wave equation with inhomogeneous terms, representing hydraulics with uniform or concentrated lateral inflow into a river, is investigated theoretically in this paper. All solutions are expressed systematically in a unified form in terms of the response function, the so-called K-function. Integrating the K-function, which is obtained using the Laplace transform, yields the S-function, which is examined in detail to improve the understanding of flood routing characteristics. The backwater effects, which usually cause discharge reductions and raised water surface elevations upstream, due both to the downstream boundary and to lateral inflow, are analyzed. With a pulse discharge in the upstream boundary inflow, the downstream boundary outflow, and the lateral inflow, respectively, channel hydrographs are routed using the S-functions. Moreover, hydrographs in infinite, semi-infinite, and finite channels are compared to exhibit the different backwater effects caused by a concentrated lateral inflow for the various channel types.
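For reference (our notation; the paper's source terms may differ in detail), the diffusive wave equation with a lateral-inflow term and the K-/S-function relation take the form

$$\frac{\partial Q}{\partial t} + c\,\frac{\partial Q}{\partial x} = D\,\frac{\partial^2 Q}{\partial x^2} + c\,q_L, \qquad S(x,t) = \int_0^t K(x,\tau)\,d\tau,$$

where $Q$ is the discharge, $c$ the kinematic wave celerity, $D$ the diffusion coefficient, and $q_L$ the lateral inflow per unit channel length.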
Abstract:
By introducing a water-depth connection equation, the flows in the main-stem and tributary channels are coupled, their discharge distribution relation is given, and a flood routing model for confluent and bifurcated channels is established. How the backwater effect in confluent channels varies with channel parameters is studied. The encounter of flood peaks from the main stem and its tributaries is analyzed, and it is shown that such peak encounters were one of the important causes of the high main-stem flood stages during the 1998 Yangtze River flood. The effects of the cutoff of the Jingjiang reach of the Yangtze main stem and of sedimentation in the Yangtze's bifurcated channels on channel flow are also explained qualitatively.
Abstract:
This report describes cases relating to the management of national marine sanctuaries in which certain scientific information was required so that managers could make decisions that effectively protected trust resources. The cases presented represent only a fraction of the difficult issues that marine sanctuary managers deal with daily. They include, among others, problems related to wildlife disturbance, vessel routing, marine reserve placement, watershed management, oil spill response, and habitat restoration. Scientific approaches to address these problems vary significantly and include literature surveys, data mining, field studies (monitoring, mapping, observations, and measurement), geospatial and biogeographic analysis, and modeling. In most cases there is also an element of expert consultation and collaboration among multiple partners, agencies with resource protection responsibilities, and other users and stakeholders. The resulting management responses may involve direct intervention (e.g., for spill response or habitat restoration), proposing boundary alternatives for marine sanctuaries or reserves, changes in agency policy or regulations, recommendations to other agencies with resource protection responsibilities, proposed changes to international or domestic shipping rules, or development of new education or outreach programs.
Abstract:
The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.
First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
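A minimal sketch of the recovery-probability objective for symmetric allocations, under our own normalization (budget $T$ spread evenly over $m$ of $n$ nodes, a collector reading a uniform random $r$-subset, and recovery succeeding once the accessed amount reaches 1, as with an MDS code):

```python
# Hypothetical recovery probability of a symmetric allocation: budget T is
# split evenly over m of n nodes; the collector reads a uniform random
# r-subset and recovers iff the total accessed mass is at least 1.
from math import comb, ceil

def p_recovery(n, r, m, T):
    per_node = T / m
    need = ceil(1.0 / per_node - 1e-12)   # nonempty nodes the collector must hit
    if need > min(r, m):
        return 0.0
    return sum(comb(m, k) * comb(n - m, r - k)
               for k in range(need, min(r, m) + 1)
               if r - k <= n - m) / comb(n, r)

# Large budget spread widely vs. small budget concentrated on a few nodes:
print(p_recovery(n=10, r=4, m=10, T=5))  # -> 1.0
print(p_recovery(n=10, r=4, m=2, T=1))   # -> ~0.133
```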
Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.
Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
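A minimal sketch of the backpressure forwarding rule at the core of such a policy (the VIP count dynamics and the caching decision are elided; names are our assumptions):

```python
# Hypothetical backpressure decision: for each data object, forward toward the
# neighbor with the largest positive virtual-interest-packet (VIP) queue
# differential; a non-positive differential means hold and wait.

def backpressure_next_hop(local_vip, neighbor_vip):
    """local_vip: {obj: count}; neighbor_vip: {nbr: {obj: count}}."""
    decisions = {}
    for obj, q in local_vip.items():
        diffs = [(q - nq.get(obj, 0), nbr) for nbr, nq in neighbor_vip.items()]
        best_diff, best_nbr = max(diffs)
        decisions[obj] = best_nbr if best_diff > 0 else None
    return decisions

print(backpressure_next_hop({"video/seg1": 5},
                            {"r2": {"video/seg1": 2}, "r3": {"video/seg1": 7}}))
# -> {'video/seg1': 'r2'}
```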
Abstract:
With data centers being the supporting infrastructure for a wide range of IT services, their efficiency has become a big concern to operators, as well as to society, for both economic and environmental reasons. The goal of this thesis is to design energy-efficient algorithms that reduce energy cost while minimizing compromise to service. We focus on the algorithmic challenges at different levels of energy optimization across the data center stack. The algorithmic challenge at the device level is to improve the energy efficiency of a single computational device via techniques such as job scheduling and speed scaling. We analyze common speed scaling algorithms in both the worst-case model and the stochastic model to answer some fundamental questions in the design of speed scaling algorithms. The algorithmic challenge at the local data center level is to dynamically allocate resources (e.g., servers) and to dispatch the workload in a data center. We develop an online algorithm that makes a data center more power-proportional by dynamically adapting the number of active servers. The algorithmic challenge at the global data center level is to dispatch the workload across multiple data centers, considering the geographical diversity of electricity prices, the availability of renewable energy, and network propagation delay. We propose algorithms to jointly optimize routing and provisioning in an online manner. Motivated by the above online decision problems, we move on to study a general class of online problems named "smoothed online convex optimization", which seeks to minimize the sum of a sequence of convex functions when "smooth" solutions are preferred. This model allows us to bridge different research communities and gain a more fundamental understanding of general online decision problems.
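In standard notation, the smoothed online convex optimization problem mentioned above selects actions $x_t$ online to minimize

$$\sum_{t=1}^{T} c_t(x_t) \;+\; \beta \sum_{t=1}^{T} \lVert x_t - x_{t-1} \rVert,$$

where each convex cost $c_t$ is revealed only at time $t$ and the second term penalizes switching, e.g., toggling servers on and off between rounds.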
Abstract:
In noncooperative cost sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource, which depends on the set of agents that choose it, is distributed. The focus is on finding distribution rules that lead to stable allocations, formalized by the concept of Nash equilibrium; canonical examples are the Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.
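For concreteness, with $W$ the welfare generated at a resource and $S$ the set of agents using it, the two canonical rules are (standard definitions, in our notation):

$$\mathrm{Sh}_i(S, W) = \sum_{T \subseteq S \setminus \{i\}} \frac{|T|!\,(|S|-|T|-1)!}{|S|!}\,\bigl(W(T \cup \{i\}) - W(T)\bigr), \qquad \mathrm{MC}_i(S, W) = W(S) - W(S \setminus \{i\}).$$

The Shapley shares sum to $W(S)$ (budget-balanced); the marginal-contribution shares in general do not.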
Recent work seeking to characterize the space of all such rules shows that the only budget-balanced distribution rules guaranteeing equilibrium existence in all welfare sharing games are generalized weighted Shapley values (GWSVs); the argument exhibits a specific 'worst-case' welfare function that forces GWSV rules to be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) for any fixed local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.
We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function nor the restriction to budget-balance that limits the design to GWSVs. Also, in order to guarantee equilibrium existence, it is necessary to work within the class of potential games, since GWSVs result in (weighted) potential games.
We also provide an alternative characterization: all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result stems from a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose: they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can trade off budget-balance against computational tractability in deciding which rule to implement.
Abstract:
Energy and sustainability are among the most critical issues facing our generation. While the abundant potential of renewable energy sources such as solar and wind provides a real opportunity for sustainability, their intermittency and uncertainty present a daunting operating challenge. This thesis aims to develop analytical models, deployable algorithms, and real systems to enable efficient integration of renewable energy into complex distributed systems with limited information.
The first thrust of the thesis is to make IT systems more sustainable by facilitating the integration of renewable energy into them. IT is among the fastest-growing sectors in energy usage and greenhouse gas pollution. Over the last decade there have been dramatic improvements in the energy efficiency of IT systems, but these do not necessarily reduce total energy consumption because ever more servers are demanded. Further, little effort has been put into making IT more sustainable, and most of the improvements come from better "engineering" rather than better "algorithms". In contrast, my work focuses on developing algorithms, with rigorous theoretical analysis, that improve the sustainability of IT. In particular, this thesis seeks to exploit the flexibilities of cloud workloads both (i) in time, by scheduling delay-tolerant workloads, and (ii) in space, by routing requests to geographically diverse data centers. These opportunities allow data centers to respond adaptively to renewable availability, varying cooling efficiency, and fluctuating energy prices, while still meeting performance requirements. The design of the enabling algorithms is, however, very challenging because of limited information, non-smooth objective functions, and the need for distributed control. Novel distributed algorithms are developed, with theoretically provable guarantees, to enable "follow the renewables" routing. Moving from theory to practice, I helped HP design and implement the industry's first Net-zero Energy Data Center.
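A minimal sketch of a one-period "follow the renewables" dispatch (a centralized snapshot under assumed names and a linear cost model; the thesis's algorithms are online and distributed, which this does not capture):

```python
# Hypothetical one-period dispatch: route load to data centers so renewable
# supply is used first, then buy the cheapest residual (brown) energy,
# respecting per-center capacity.

def dispatch(total_load, centers):
    """centers: list of dicts with 'name', 'cap', 'renewable', 'price'."""
    alloc = {c["name"]: 0.0 for c in centers}
    for c in sorted(centers, key=lambda c: -c["renewable"]):  # soak up renewables
        take = min(total_load, c["renewable"], c["cap"])
        alloc[c["name"]] += take
        total_load -= take
    for c in sorted(centers, key=lambda c: c["price"]):       # cheapest remainder
        take = min(total_load, c["cap"] - alloc[c["name"]])
        alloc[c["name"]] += take
        total_load -= take
    return alloc

print(dispatch(10, [{"name": "dc1", "cap": 8, "renewable": 5, "price": 0.10},
                    {"name": "dc2", "cap": 6, "renewable": 2, "price": 0.04}]))
# -> {'dc1': 5.0, 'dc2': 5.0}
```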
The second thrust of this thesis is to use IT systems to improve the sustainability and efficiency of our energy infrastructure through data center demand response. The main challenges in integrating more renewable sources into the existing power grid come from the fluctuation and unpredictability of renewable generation. Although energy storage and reserves can potentially solve these issues, they are very costly. One promising alternative is to make cloud data centers demand responsive; the potential of such an approach is huge.
To realize this potential, we need adaptive and distributed control of cloud data centers and new electricity market designs for distributed electricity resources. My work progresses in both directions. In particular, I have designed online algorithms with theoretically guaranteed performance that let data center operators handle uncertainty under popular demand response programs. Building on local control rules for customers, I have further designed new pricing schemes for demand response that align the interests of customers, utility companies, and society, improving social welfare.
Abstract:
This thesis presents a novel active mirror technology based on carbon fiber composites and replication manufacturing processes. Multiple additional layers are integrated into the structure to provide the reflective layer, actuation capabilities, and electrode routing. The mirror is thin, lightweight, and has large actuation capabilities. These features, along with the associated manufacturing processes, represent a significant change in design compared to traditional optics: structural redundancy in the form of added material or support structures is replaced by thin, unsupported, lightweight substrates with large actuation capabilities.
Several studies motivated by the desire to improve as-manufactured figure quality are performed. Firstly, imperfections in thin CFRP laminates and their effect on post-cure shape errors are studied. Numerical models are developed and compared to experimental measurements on flat laminates. Techniques to mitigate figure errors for thicker laminates are also identified. A method of properly integrating the reflective facesheet onto the front surface of the CFRP substrate is also presented. Finally, the effect of bonding multiple initially flat active plates to the backside of a curved CFRP substrate is studied. Figure deformations along with local surface defects are predicted and characterized experimentally. By understanding the mechanics behind these processes, significant improvements to the overall figure quality have been made.
Studies related to the actuation response of the mirror are also performed. The active properties of two materials are characterized and compared. Optimal active layer thicknesses for thin surface-parallel schemes are determined. Finite element simulations are used to predict shape correction capabilities, demonstrating high correctability and stroke over low-order modes. The effect of actuator saturation is studied and shown to significantly degrade shape correction performance.
The initial figure as well as actuation capabilities of a fully-integrated active mirror prototype are characterized experimentally using a Projected Hartmann test. A description of the test apparatus is presented along with two verification measurements. The apparatus is shown to accurately capture both high-amplitude low spatial-frequency figure errors as well as those at lower amplitudes but higher spatial frequencies. A closed-loop figure correction is performed, reducing figure errors by 94%.
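A minimal sketch of the correction step as a standard influence-matrix least-squares problem (the prototype's actual controller, basis, and saturation handling are not specified here; all names and data below are our assumptions):

```python
# Hypothetical figure correction: e is the measured figure error (e.g., from
# the Hartmann test); column j of the influence matrix A is the surface
# response to a unit command on actuator j. Fitting A @ u ~= e and commanding
# -u cancels the fitted part of the error.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_actuators = 200, 12
A = rng.normal(size=(n_points, n_actuators))   # influence functions (synthetic)
e = A @ rng.normal(size=n_actuators) + 0.05 * rng.normal(size=n_points)

u, *_ = np.linalg.lstsq(A, e, rcond=None)      # least-squares actuator commands
u = np.clip(u, -1.0, 1.0)                      # crude model of actuator saturation
residual = e - A @ u                           # post-correction figure error
print(f"RMS figure error: {e.std():.3f} -> {residual.std():.3f}")
```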
Abstract:
An N×N optical switch device based on double-sided mirrors is reported. The integrated optical circuit designs and working principles of 2×2 and 4×4 optical switches using double-sided mirrors are introduced, along with the overall structure of an N×N optical switch device that adopts a Benes network with the 2×2 and 4×4 switches as basic units. Based on the "one-stroke drawing" (Eulerian path) principle, the rearrangeably non-blocking property of the 4×4, 8×8, and 16×16 optical switch matrices and the optical path selection algorithm of the switch matrix are analyzed. Finally, a 16×16 optical switch matrix was fabricated based on the 2×2 and 4×4 optical switch technology. Tests show that the device achieves good insertion loss, return loss, crosstalk, and switching time, verifying the feasibility of the design concept and the fabrication process. In the double-sided-mirror-based optical switch matr…
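For context, the following are standard Benes-network facts rather than figures reported for this device: an $N \times N$ Benes network built from $r \times r$ elements (with $N$ a power of $r$) is rearrangeably non-blocking with $2\log_r N - 1$ stages, so a 16×16 fabric needs $2\log_2 16 - 1 = 7$ stages of 2×2 elements but only $2\log_4 16 - 1 = 3$ stages when 4×4 blocks serve as the basic unit, as here.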