897 results for Multicommodity network design problem


Relevance:

30.00%

Publisher:

Abstract:

We consider a single-hop data-gathering sensor network, consisting of a set of sensor nodes that transmit data periodically to a base-station. We are interested in maximizing the lifetime of this network. With our definition of network lifetime and the assumption that radio transmission energy forms the most significant portion of the total energy consumption at a sensor node, we attempt to enhance the network lifetime by reducing the transmission energy budget of the sensor nodes through three system-level opportunities. We pose the problem of maximizing lifetime as a max-min optimization problem subject to the constraints of successful data collection and limited energy supply at each node. This turns out to be an extremely difficult optimization problem to solve. To reduce its complexity, we allow the sensor nodes and the base-station to communicate interactively with each other and employ instantaneous decoding at the base-station. The chief contribution of the paper is to show that the computational complexity of our problem is determined by the complex interplay of various system-level opportunities and challenges.
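
A max-min lifetime formulation of this kind can be sketched as follows; the notation (initial energy E_i at node i, per-round transmission energy e_i(x) under a transmission scheme x) is assumed here for illustration and is not taken from the paper:

    \max_{x \in \mathcal{X}} \; \min_{i} \; \frac{E_i}{e_i(x)} \quad \text{subject to the base-station recovering every node's data in each round.}

Here E_i / e_i(x) is the number of collection rounds node i survives, so the inner minimum is the network lifetime being maximized.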

Relevance:

30.00%

Publisher:

Abstract:

We consider the incentive compatible broadcast (ICB) problem in ad hoc wireless networks with selfish nodes. We design a Bayesian incentive compatible broadcast (BIC-B) protocol to address this problem. VCG mechanism based schemes have been widely used in the literature to design dominant strategy incentive compatible (DSIC) protocols for ad hoc wireless networks. VCG based mechanisms have two critical limitations: (i) the network is required to be bi-connected, and (ii) the resulting protocol is not budget balanced. Our proposed BIC-B protocol overcomes these difficulties. We also prove the optimality of the proposed scheme.
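
For context, the VCG (Clarke pivot) payment rule that the abstract contrasts against charges each node its externality on the others; in standard mechanism-design notation (reported valuations v_j, efficient outcome k^* with all nodes present, and efficient outcome k^{-i} with node i excluded), it reads

    p_i = \sum_{j \neq i} v_j(k^{-i}) - \sum_{j \neq i} v_j(k^{*}).

Because these payments need not sum to zero across nodes, VCG-based broadcast protocols are in general not budget balanced, which is one of the two limitations the BIC-B protocol is designed to avoid.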

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we exploit the idea of decomposition to match buyers and sellers in an electronic exchange for trading large volumes of homogeneous goods, where the buyers and sellers specify marginal-decreasing piecewise constant price curves to capture volume discounts. Such exchanges are relevant for automated trading in many e-business applications. The problem of determining winners and Vickrey prices in such exchanges is known to have a worst-case complexity equal to that of as many as (1 + m + n) NP-hard problems, where m is the number of buyers and n is the number of sellers. Our method decomposes the overall exchange problem into two separate and simpler problems: 1) a forward auction and 2) a reverse auction, which turn out to be generalized knapsack problems. In the proposed approach, we first determine the quantity of units to be traded between the sellers and the buyers using fast heuristics developed by us. Next, we solve the forward auction and the reverse auction using fully polynomial time approximation schemes available in the literature. The proposed approach has worst-case polynomial time complexity, and our experimentation shows that it produces good quality solutions to the problem. Note to Practitioners: In recent times, electronic marketplaces have provided an efficient way for businesses and consumers to trade goods and services. The use of innovative mechanisms and algorithms has made it possible to improve the efficiency of electronic marketplaces by enabling optimization of revenues for the marketplace and of utilities for the buyers and sellers. In this paper, we look at single-item, multiunit electronic exchanges. These are electronic marketplaces where buyers submit bids and sellers submit asks for multiple units of a single item. We allow buyers and sellers to specify volume discounts using suitable functions. Such exchanges are relevant for high-volume business-to-business trading of standard products, such as silicon wafers, very large-scale integrated chips, desktops, telecommunications equipment, commoditized goods, etc. The problem of determining winners and prices in such exchanges is known to involve solving many NP-hard problems. Our paper exploits the familiar idea of decomposition, uses certain algorithms from the literature, and develops two fast heuristics to solve the problem in a near optimal way in worst-case polynomial time.
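
As a rough illustration of the first step (fixing the quantity to be traded before the forward and reverse auctions are solved), the sketch below greedily matches toy stepwise bid and ask curves. The function and data are assumptions made for illustration, not the authors' heuristics; in particular, the toy ask curve is marginal-increasing so that a simple greedy match is well defined, whereas the volume-discount (marginal-decreasing) curves treated in the paper are what make the problem knapsack-like and call for the heuristics and approximation schemes mentioned above.

# Illustrative sketch only: match aggregate demand and supply given
# stepwise price curves. Not the paper's heuristic.
def traded_quantity(buyer_steps, seller_steps):
    """buyer_steps: list of (quantity, unit_price) with non-increasing prices.
    seller_steps: list of (quantity, unit_price) with non-decreasing prices.
    Greedily trades units while the best remaining bid >= best remaining ask."""
    bids = sorted(buyer_steps, key=lambda s: -s[1])
    asks = sorted(seller_steps, key=lambda s: s[1])
    traded, i, j = 0, 0, 0
    bq, bp = bids[0] if bids else (0, 0.0)
    sq, sp = asks[0] if asks else (0, 0.0)
    while i < len(bids) and j < len(asks) and bp >= sp:
        q = min(bq, sq)          # units tradable at this bid/ask pair
        traded += q
        bq -= q
        sq -= q
        if bq == 0:
            i += 1
            if i < len(bids):
                bq, bp = bids[i]
        if sq == 0:
            j += 1
            if j < len(asks):
                sq, sp = asks[j]
    return traded

# Example: a buyer curve with volume discounts and a rising ask curve.
buyers = [(10, 5.0), (10, 4.0)]      # first 10 units valued at 5.0, next 10 at 4.0
sellers = [(8, 3.5), (15, 4.5)]      # 8 units offered at 3.5, 15 more at 4.5
print(traded_quantity(buyers, sellers))   # prints 10

The forward and reverse auctions would then allocate this traded quantity among the buyers and the sellers, for example via the knapsack-style approximation schemes cited in the abstract.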

Relevance:

30.00%

Publisher:

Abstract:

We consider a dense, ad hoc wireless network confined to a small region, such that direct communication is possible between any pair of nodes. The physical communication model is that a receiver decodes the signal from a single transmitter, while treating all other signals as interference. Data packets are sent between source-destination pairs by multihop relaying. We assume that nodes self-organise into a multihop network such that all hops are of length d meters, where d is a design parameter. There is a contention based multiaccess scheme, and it is assumed that every node always has data to send, either originated from it or a transit packet (saturation assumption). In this scenario, we seek to maximize a measure of the transport capacity of the network (measured in bit-meters per second) over power controls (in a fading environment) and over the hop distance d, subject to an average power constraint. We first argue that for a dense collection of nodes confined to a small region, single cell operation is efficient for single user decoding transceivers. Then, operating the dense ad hoc network (described above) as a single cell, we study the optimal hop length and power control that maximizes the transport capacity for a given network power constraint. More specifically, for a fading channel and for a fixed transmission time strategy (akin to the IEEE 802.11 TXOP), we find that there exists an intrinsic aggregate bit rate ($\Theta_{\mathrm{opt}}$ bits per second, depending on the contention mechanism and the channel fading characteristics) carried by the network when operating at the optimal hop length and power control. The optimal transport capacity is of the form $d_{\mathrm{opt}}(\bar{P}_t) \times \Theta_{\mathrm{opt}}$, with $d_{\mathrm{opt}}$ scaling as $\bar{P}_t^{1/\eta}$, where $\bar{P}_t$ is the available time-average transmit power and $\eta$ is the path loss exponent. Under certain conditions on the fading distribution, we then provide a simple characterisation of the optimal operating point.

Relevance:

30.00%

Publisher:

Abstract:

We are concerned with maximizing the lifetime of a data-gathering wireless sensor network consisting of a set of nodes directly communicating with a base-station. We model this scenario as m-message interactive communication between multiple correlated informants (sensor nodes) and a recipient (base-station). Within this framework, we show that m-message interactive communication can indeed enhance network lifetime. Both worst-case and average-case performances are considered.

Relevance:

30.00%

Publisher:

Abstract:

Increasing numbers of medical schools in Australia and overseas have moved away from didactic teaching methodologies and embraced problem-based learning (PBL) to improve clinical reasoning and communication skills and to encourage self-directed lifelong learning. In January 2005, the first cohort of students entered the new MBBS program at the Griffith University School of Medicine, Gold Coast, to embark upon an exciting, fully integrated PBL curriculum combining electronic delivery, communication and evaluation systems that incorporate the cognitive principles underpinning the PBL process. This chapter examines the educational philosophies and the design of the e-learning environment underpinning the processes developed to deliver, monitor and evaluate the curriculum. The key initiatives taken at Griffith to promote student engagement and innovative, distinctive approaches to student learning, as promoted within the conceptual model for the curriculum, are (a) Student engagement, (b) Pastoral care, (c) Staff engagement, (d) Monitoring and (e) Curriculum/Program Review. © 2007 Springer-Verlag Berlin Heidelberg.

Relevance:

30.00%

Publisher:

Abstract:

In this work, we explore simultaneous geometry design and material selection for statically determinate trusses by posing it as a continuous optimization problem. The underlying principles of our approach are structural optimization and Ashby’s procedure for material selection from a database. For simplicity and ease of initial implementation, only static loads are considered in this work, with the objectives of maximizing stiffness, minimizing weight/cost, and ensuring safety against failure. Safety of tensile and compression members in the truss is treated differently to prevent yield and buckling failures, respectively. Geometry variables such as lengths and orientations of members are taken to be the design variables in an assumed layout. Areas of cross-section of the members are determined to satisfy the failure constraints in each member. Along the lines of Ashby’s material indices, a new design index is derived for trusses. The design index helps in choosing the most suitable material for any geometry of the truss. Using the design index, both the design space and the material database are searched simultaneously using gradient-based optimization algorithms. The important feature of our approach is that the formulated optimization problem is continuous, although material selection from a database is an inherently discrete problem. A few illustrative examples are included. It is observed that the method is capable of determining the optimal topology in addition to the optimal geometry when the assumed layout contains more links than are necessary for optimality.
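
For orientation, the classical Ashby indices for axially loaded (truss-like) members illustrate the kind of quantity that the new design index generalizes; the expressions below are the textbook special case, not the index derived in the paper. For a tie of fixed length L with prescribed axial stiffness S = EA/L, the mass is

    m = \rho A L = S L^{2} \frac{\rho}{E},

so minimum mass at fixed stiffness means choosing the material that maximizes M_1 = E/\rho; analogously, minimum mass at prescribed strength means maximizing M_2 = \sigma_f/\rho, where \sigma_f is the failure strength and \rho the density.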

Relevance:

30.00%

Publisher:

Abstract:

In this study, the stability of an anchored cantilever sheet pile wall in sandy soils is investigated using reliability analysis. Targeted stability is formulated as an optimization problem in the framework of an inverse first order reliability method. A sensitivity analysis is conducted to investigate the effect of the parameters influencing the stability of the sheet pile wall. Backfill soil properties, soil-steel pile interface friction angle, depth of the water table from the top of the sheet pile wall, total depth of embedment below the dredge line, yield strength of steel, section modulus of the steel sheet pile, and anchor pull are all treated as random variables. The sheet pile wall system is modeled as a series combination of failure modes. Penetration depth, anchor pull, and section modulus are calculated for various target component and system reliability indices based on three limit states. These are: rotational failure about the position of the anchor rod, expressed in terms of a moment ratio; sliding failure, expressed in terms of a force ratio; and flexural failure of the steel sheet pile wall, expressed in terms of a section modulus ratio. An attempt is made to propose reliability-based design charts that consider both the failure criteria and the variability in the parameters. The results of the study are compared with studies in the literature.
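
As a generic illustration of the inverse first-order reliability setting (notation assumed here, not taken from the paper): after transforming the random inputs to standard normal variables u, the reliability index and, for instance, a moment-ratio limit state for rotation about the anchor rod can be written as

    \beta = \min_{\{u \,:\, g(u) = 0\}} \lVert u \rVert, \qquad g = \frac{M_R}{M_O} - 1,

where M_R and M_O are the resisting and overturning moments about the anchor rod. In the inverse problem the target \beta is prescribed and a design quantity, such as the embedment depth, is solved for so that the attained reliability meets the target.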

Relevance:

30.00%

Publisher:

Abstract:

The problem of designing high rate, full diversity noncoherent space-time block codes (STBCs) with low encoding and decoding complexity is addressed. First, the notion of g-group encodable and g-group decodable linear STBCs is introduced. Then, for a known class of rate-1 linear designs, an explicit construction of fully-diverse signal sets that lead to four-group encodable and four-group decodable differential scaled unitary STBCs is provided for any number of antennas that is a power of two. Previous works on differential STBCs either sacrifice decoding complexity for higher rate or sacrifice rate for lower decoding complexity.
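
For readers unfamiliar with the differential setting, the standard differential unitary space-time transmission and detection rules (a general textbook fact, not the specific scaled-unitary construction of this work) are

    S_0 = I, \qquad S_t = C_t S_{t-1}, \qquad \hat{C}_t = \arg\min_{C \in \mathcal{C}} \lVert Y_t - C\, Y_{t-1} \rVert^2,

where the codewords C_t are unitary and the channel is assumed roughly constant over consecutive blocks, so the receiver needs no channel estimate; the construction described above concerns how to build the codebook \mathcal{C} so that encoding and decoding each split into four independent groups.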

Relevance:

30.00%

Publisher:

Abstract:

We present a generic method/model for multi-objective design optimization of laminated composite components, based on the vector evaluated particle swarm optimization (VEPSO) algorithm. VEPSO is a novel, co-evolutionary multi-objective variant of the popular particle swarm optimization (PSO) algorithm. In the current work, a modified version of the VEPSO algorithm for discrete variables has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with the multiple objectives of minimizing the weight and the total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, their stacking sequence (the orientation of the layers), and the thickness of each layer. Classical lamination theory is utilized to determine the stresses in the component, and the design is evaluated against three failure criteria: the failure-mechanism-based criterion, the maximum stress criterion, and the Tsai-Wu criterion. The optimization method is validated for a number of different loading configurations: uniaxial, biaxial, and bending loads. The design optimization has been carried out both for variable stacking sequences and for fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented. (C) 2007 Elsevier Ltd. All rights reserved.
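
A minimal sketch of discrete particle swarm optimization over a stacking sequence is given below, with a toy objective standing in for the weight/cost/strength trade-off; the variable names, the objective, and the rounding scheme are illustrative assumptions, and this is not the paper's VEPSO implementation.

# Toy discrete PSO over a laminate stacking sequence.
# Each particle is a list of ply-angle indices into ANGLES.
# The objective below is a stand-in, not the paper's weight/cost model.
import random

ANGLES = [0, 45, -45, 90]   # allowed ply orientations (degrees)
N_PLIES = 8                 # fixed number of layers for this sketch

def objective(seq):
    # Illustrative penalty: prefer balanced +/-45 content and two 0-degree plies.
    angles = [ANGLES[i] for i in seq]
    return abs(angles.count(45) - angles.count(-45)) + abs(angles.count(0) - 2)

def discrete_pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    particles = [[random.randrange(len(ANGLES)) for _ in range(N_PLIES)]
                 for _ in range(n_particles)]
    velocity = [[0.0] * N_PLIES for _ in range(n_particles)]
    pbest = [p[:] for p in particles]
    pbest_val = [objective(p) for p in particles]
    g = min(range(n_particles), key=lambda k: pbest_val[k])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for k, p in enumerate(particles):
            for d in range(N_PLIES):
                velocity[k][d] = (w * velocity[k][d]
                                  + c1 * random.random() * (pbest[k][d] - p[d])
                                  + c2 * random.random() * (gbest[d] - p[d]))
                # Round the continuous update back onto the discrete index set.
                p[d] = int(round(p[d] + velocity[k][d])) % len(ANGLES)
            val = objective(p)
            if val < pbest_val[k]:
                pbest[k], pbest_val[k] = p[:], val
                if val < gbest_val:
                    gbest, gbest_val = p[:], val
    return [ANGLES[i] for i in gbest], gbest_val

print(discrete_pso())

In the VEPSO variant described above, one swarm is run per objective and each swarm's velocity update is also attracted to the best particle found by the other swarm, which is how the weight and cost objectives are co-evolved.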

Relevance:

30.00%

Publisher:

Abstract:

We consider a multicommodity flow problem on a complete graph whose edges have random, independent, and identically distributed capacities. We show that, as the number of nodes tends to infinity, the maximum utility, given by the average of a concave function of each commodity flow, has an almost-sure limit. Furthermore, the asymptotically optimal flow uses only direct and two-hop paths, and can be obtained in a distributed manner.
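
Written out under assumed notation (one commodity per node pair with flow f_{ij}, path flows x_P, i.i.d. edge capacities C_e, and a concave utility U; none of this notation is taken from the paper), the objective described above is

    \max \; \frac{1}{\binom{n}{2}} \sum_{\{i,j\}} U(f_{ij}) \quad \text{s.t.} \quad \sum_{P \in \mathcal{P}_{ij}} x_P = f_{ij} \;\; \forall \{i,j\}, \qquad \sum_{P \ni e} x_P \le C_e \;\; \forall e,

and the result is that, as n grows, restricting each path set \mathcal{P}_{ij} to the direct edge and the two-hop paths through a third node loses nothing asymptotically.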

Relevance:

30.00%

Publisher:

Abstract:

We share our experience in planning, designing and deploying a wireless sensor network over an area of one square kilometre. Environmental data such as soil moisture, temperature, barometric pressure, and relative humidity are collected in this area, situated in the semi-arid region of Karnataka, India. We hope that the information derived from these data will help marginal farmers improve their farming practices. After establishing the need for such a project, we present the big picture of the data-gathering network, the software architecture we have used, the range measurements needed for determining the sensor density, and the packaging issues that seem to play a crucial role in field deployments. Our field deployment experiences include designing for intermittent grid power, enhancing software tools to enable quicker and more effective deployment, and dealing with flash memory corruption. The first results on data gathering look encouraging.

Relevance:

30.00%

Publisher:

Abstract:

The future of civic engagement is characterised by both technological innovation and new technological user practices that are fuelled by trends towards mobile, personal devices; broadband connectivity; open data; urban interfaces; and cloud computing. These technology trends are progressing at a rapid pace and have led global technology vendors to package and sell the “Smart City” as a centralised service delivery platform predicted to optimise and enhance cities’ key performance indicators and to generate a profitable market. The top-down deployment of these large and proprietary technology platforms has helped sectors such as energy, transport, and healthcare to increase efficiencies. However, an increasing number of scholars and commentators warn of another “IT bubble” emerging. Along with some city leaders, they argue that the top-down approach does not fit the governance dynamics and values of a liberal democracy when applied across sectors. A thorough understanding is required of the socio-cultural nuances of how people work, live, and play across different environments, and of how they employ social media and mobile devices to interact with, engage in, and constitute public realms. Although the term “slacktivism” is sometimes used to denote a watered-down version of civic engagement and activism that is reduced to clicking a “Like” button and signing online petitions, we believe that we are far from witnessing another Biedermeier period that saw people focus on the domestic and the non-political. There is plenty of evidence to the contrary, such as the post-election violence in Kenya in 2008, the Occupy movements in New York, Hong Kong and elsewhere, the Arab Spring, Stuttgart 21, Fukushima, the Taksim Gezi Park protests in Istanbul, and the Vinegar Movement in Brazil in 2013. These examples of civic action shape the dynamics of governments and, in turn, call for new processes to be incorporated into governance structures. Participatory research into these new processes across the triad of people, place and technology is a significant and timely investment to foster productive, sustainable, and liveable human habitats. With this article, we want to reframe the current debates in academia and the priorities in industry and government to allow citizens and civic actors to take their rightful place at the centre of civic movements. This calls for new participatory approaches to co-inquiry and co-design. It is an evolving process with an explicit agenda to facilitate change, and we propose participatory action research (PAR) as an indispensable component in the journey to develop new governance infrastructures and practices for civic engagement. We do not limit our definition of civic technologies to tools designed simply to enhance government and governance, such as renewing your car registration online or casting your vote electronically on election day. Rather, we are interested in civic media and technologies that foster citizen engagement in the widest sense, and particularly in the participatory design of civic technologies that strive to involve citizens in political debate and action and to question conventional approaches to political issues. The rationale for this approach is to offer an alternative to smart cities stuck in a “perpetual tomorrow,” based on the many weak and strong signals of technology-centred civic action seen today.
It seeks to emphasise and direct attention to active citizenry over passive consumerism, human actors over human factors, culture over infrastructure, and prosperity over efficiency. First, we will have a look at some fundamental issues arising from applying simplistic smart city visions to the kind of problem a city poses. We focus on the touch points between “the city” and its civic body, the citizens. In order to provide for meaningful civic engagement, the city must provide appropriate interfaces.

Relevance:

30.00%

Publisher:

Abstract:

This paper considers the problem of the design of the quadratic weir notch, which finds application in the proportionate method of flow measurement in a by-pass, such that the discharge through it is proportional to the square root of the head measured above a certain datum. The weir notch consists of a bottom in the form of a rectangular weir of width 2W and depth a, over which a designed curve is fitted. A theorem concerning the flow through compound weirs, called the “slope-discharge continuity theorem”, is stated and proved. Using this theorem, the problem is reduced to the determination of an exact solution of Volterra's integral equation in Abel's form. It is shown that, in the case of a quadratic weir notch, the discharge is proportional to the square root of the head measured above a certain datum above the crest of the weir. Further, it is observed that the function defining the shape of the weir is rapidly convergent and its value is very nearly zero at distances of 3a and more above the crest of the weir. This interesting and significant behaviour of the function incidentally provides a very good approximate solution to a particular Fredholm integral equation of the first kind, transforming the notch into a device called a “proportional-orifice”. A new concept of a “notch-orifice”, capable of passing a discharge proportional to the square root of the head (above a particular datum) while acting both as a notch and as an orifice, is presented. A typical experiment with one such notch-orifice, having a = 4 in. and W = 6 in., shows remarkable agreement with the theory, with a constant coefficient of discharge of 0.61 over the ranges of both notch and orifice flow.
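
For orientation, the textbook sharp-crested weir relation from which such shape-design problems are posed (notation assumed here: half-width x(y) of the notch at height y above the crest, head h, coefficient of discharge C_d, approach velocity neglected) is

    Q(h) = 2 C_d \sqrt{2g} \int_{0}^{h} x(y) \sqrt{h - y}\, \mathrm{d}y,

so requiring Q(h) \propto \sqrt{h - h_0} for all heads above the rectangular base turns the determination of the profile x(y) into a Volterra integral equation of Abel type, which is the equation solved exactly in the paper.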