967 results for "optimal rate"
Abstract:
We develop an optimal, distributed, low-feedback timer-based selection scheme that enables next-generation rate-adaptive wireless systems to exploit multi-user diversity. In our scheme, each user sets a timer depending on its signal-to-noise ratio (SNR) and transmits a small packet to identify itself when its timer expires. When the SNR-to-timer mapping is monotone non-increasing, the timers of users with better SNRs expire earlier. Thus, the base station (BS) simply selects the first user whose timer expiry it can detect, and transmits data to it at as high a rate as is reliably possible. However, timers that expire too close to one another cannot be detected by the BS due to collisions. We characterize in detail the structure of the SNR-to-timer mapping that optimally handles these collisions to maximize the average data rate. We prove that the optimal timer values take only a discrete set of values, and that the rate adaptation policy strongly influences the optimal scheme's structure. The optimal average rate is very close to that of ideal selection, in which the BS always selects the highest-rate user, and is much higher than that of the popular, but ad hoc, timer schemes considered in the literature.
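The selection mechanism described above can be sketched in a small Monte Carlo simulation. This is an illustrative toy, not the paper's optimal mapping: the monotone SNR-to-timer rule, the exponential SNR distribution, and the collision guard interval are all assumptions chosen for clarity.

```python
import random

def simulate_selection(num_users=10, t_max=1.0, guard=0.01, trials=10000, seed=1):
    """Monte Carlo sketch of timer-based best-user selection.

    Each user draws an exponential SNR and maps it to a timer via a toy
    monotone non-increasing mapping, so better SNRs expire earlier. If the
    second-earliest timer expires within `guard` of the earliest, the BS
    cannot detect the winner (collision) and selection fails for that trial.
    Returns (selection success rate, rate of picking the best user).
    """
    rng = random.Random(seed)
    successes, best_picked = 0, 0
    for _ in range(trials):
        snrs = [rng.expovariate(1.0) for _ in range(num_users)]
        # Toy mapping: timer shrinks as SNR grows (an assumption, not the
        # optimal discrete mapping derived in the paper).
        timers = sorted((t_max / (1.0 + s), s) for s in snrs)
        first_t, first_snr = timers[0]
        collided = len(timers) > 1 and timers[1][0] - first_t < guard
        if not collided:
            successes += 1
            if first_snr == max(snrs):
                best_picked += 1
    return successes / trials, best_picked / trials
```

Widening `guard` models a BS that needs more separation between expiries, which lowers the success rate; the paper's contribution is choosing the mapping so this loss is minimized.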
Abstract:
Large-grain synchronous dataflow graphs or multi-rate graphs have the distinct feature that the nodes of the dataflow graph fire at different rates. Such multi-rate large-grain dataflow graphs have been widely regarded as a powerful programming model for DSP applications. In this paper we propose a method to minimize the buffer storage requirement in constructing rate-optimal compile-time (MBRO) schedules for multi-rate dataflow graphs. We demonstrate that the constraints to minimize buffer storage while executing at the optimal computation rate (i.e. the maximum possible computation rate without storage constraints) can be formulated as a unified linear programming problem in our framework. A novel feature of our method is that in constructing the rate-optimal schedule, it directly minimizes the memory requirement by choosing the schedule time of nodes appropriately. In addition, we propose a new circular-arc interval-graph coloring algorithm to further reduce the memory requirement by allowing buffer sharing among the arcs of the multi-rate dataflow graph. We have constructed an experimental testbed which implements our MBRO scheduling algorithm as well as (i) the widely used periodic admissible parallel schedules (also known as block schedules) proposed by Lee and Messerschmitt (IEEE Transactions on Computers, vol. 36, no. 1, 1987, pp. 24-35), (ii) the optimal scheduling buffer allocation (OSBA) algorithm of Ning and Gao (Conference Record of the Twentieth Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Charleston, SC, Jan. 10-13, 1993, pp. 29-42), and (iii) the multi-rate software pipelining (MRSP) algorithm (Govindarajan and Gao, in Proceedings of the 1993 International Conference on Application Specific Array Processors, Venice, Italy, Oct. 25-27, 1993, pp. 77-88). Schedules generated for a number of random dataflow graphs and for a set of DSP application programs using the different scheduling methods are compared.
The experimental results demonstrate a significant improvement (10-20%) in buffer requirements for the MBRO schedules compared to the schedules generated by the other three methods, without sacrificing the computation rate. The MBRO method also gives a 20% average improvement in computation rate compared to Lee's block scheduling method.
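To make the buffer-requirement notion concrete, the following sketch simulates one iteration of a single two-actor multi-rate SDF edge under a simple demand-driven schedule and reports the peak token count on the edge. This is an illustrative bound for one edge, not the paper's MBRO linear program; the schedule policy is an assumption.

```python
from math import gcd

def sdf_edge_buffer(produce, consume):
    """Simulate one iteration of a two-actor multi-rate SDF edge A -> B,
    where A produces `produce` tokens per firing and B consumes `consume`.
    B fires whenever enough tokens are buffered, otherwise A fires.
    Returns the peak number of buffered tokens over the iteration."""
    period = produce * consume // gcd(produce, consume)
    fires_a, fires_b = period // produce, period // consume
    tokens, peak, done_b = 0, 0, 0
    while done_b < fires_b:
        if tokens < consume:        # B blocked: fire producer A
            tokens += produce
        else:                       # enough tokens: fire consumer B
            tokens -= consume
            done_b += 1
        peak = max(peak, tokens)
    return peak
```

For a (2, 3) edge this yields a peak of 4 tokens, matching the classical per-edge bound produce + consume - gcd(produce, consume); the MBRO formulation goes further by trading off such buffers globally across the graph while keeping the schedule rate-optimal.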
Abstract:
The throughput-optimal discrete-rate adaptation policy, when nodes are subject to constraints on the average power and bit error rate, is governed by a power control parameter, for which a closed-form characterization has remained an open problem. The parameter is essential in determining the rate adaptation thresholds and the transmit rate and power at any time, and ensuring adherence to the power constraint. We derive novel insightful bounds and approximations that characterize the power control parameter and the throughput in closed-form. The results are comprehensive as they apply to the general class of Nakagami-m (m >= 1) fading channels, which includes Rayleigh fading, uncoded and coded modulation, and single and multi-node systems with selection. The results are appealing as they are provably tight in the asymptotic large average power regime, and are designed and verified to be accurate even for smaller average powers.
Abstract:
An opportunistic, rate-adaptive system exploits multi-user diversity by selecting the best node, which has the highest channel power gain, and adapting the data rate to the selected node's channel gain. Since channel knowledge is local to a node, we propose a distributed, low-feedback timer backoff scheme to select the best node. It uses a mapping from the channel gain, or, more generally, a real-valued metric, to a timer value, such that timers of nodes with higher metrics expire earlier. Our goal is to maximize the system throughput when rate adaptation is discrete, as is the case in practice. To improve throughput, we use a pragmatic selection policy, in which even a node other than the best node can be selected. We derive several novel, insightful results about the optimal mapping and develop an algorithm to compute it. These results bring out the inter-relationship between the discrete rate adaptation rule, the optimal mapping, and the selection policy. We also extensively benchmark the performance of the optimal mapping against several timer and opportunistic multiple access schemes considered in the literature, and demonstrate that the developed scheme is effective in many regimes of interest.
Abstract:
We consider the problem of characterizing the minimum average delay, or equivalently the minimum average queue length, of message symbols randomly arriving to the transmitter queue of a point-to-point link which dynamically selects a (n, k) block code from a given collection. The system is modeled by a discrete time queue with an IID batch arrival process and batch service. We obtain a lower bound on the minimum average queue length, which is the optimal value for a linear program, using only the mean (λ) and variance (σ²) of the batch arrivals. For a finite collection of (n, k) codes the minimum achievable average queue length is shown to be Θ(1/ε) as ε ↓ 0, where ε is the difference between the maximum code rate and λ. We obtain a sufficient condition for code rate selection policies to achieve this optimal growth rate. A simple family of policies that use only one block code each, as well as two other heuristic policies, are shown to be weakly optimal in the sense of achieving the 1/ε growth rate. An appropriate selection from the family of policies that use only one block code each is also shown to achieve the optimal coefficient σ²/2 of the 1/ε growth rate. We compare the performance of the heuristic policies with the minimum achievable average queue length and the lower bound numerically. For a countable collection of (n, k) codes, the optimal average queue length is shown to be Ω(1/ε). We illustrate the selectivity among policies of the growth rate optimality criterion for both finite and countable collections of (n, k) block codes.
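A minimal numerical sketch of this setting, under assumed modeling choices (Poisson batch arrivals, a single-code policy, one codeword occupying n slots), is the following: a discrete-time queue served by one (n, k) block code, together with the σ²/(2ε) growth term that governs the asymptotic lower bound. Neither function reproduces the paper's linear program; both are illustrative.

```python
import random

def avg_queue(code_n, code_k, lam, steps=20_000, seed=7):
    """Sketch: discrete-time queue served by a single (n, k) block code.
    Per slot, a Poisson(lam) batch of message symbols arrives; each
    completed codeword occupies code_n slots and removes up to code_k
    symbols. Returns the time-average queue length."""
    rng = random.Random(seed)

    def poisson(mu):
        # Knuth's multiplication method; adequate for small mu.
        threshold, p, count = pow(2.718281828459045, -mu), 1.0, 0
        while True:
            p *= rng.random()
            if p <= threshold:
                return count
            count += 1

    q, total = 0, 0
    for _ in range(steps):
        for _ in range(code_n):        # codeword occupies n slots
            q += poisson(lam)
            total += q
        q = max(q - code_k, 0)         # decode: up to k symbols leave
    return total / (steps * code_n)

def queue_lower_bound(lam, var, rate):
    """The asymptotic growth term sigma^2 / (2*eps), eps = rate - lam."""
    return var / (2.0 * (rate - lam))
```

Halving ε doubles `queue_lower_bound`, which is the 1/ε growth rate the abstract refers to; simulated averages from `avg_queue` grow in the same fashion as λ approaches the code rate k/n.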
Abstract:
Implementation and collapse of exchange rate pegging schemes are recurrent events. A currency crisis (pegging) is usually followed by an economic downturn (boom). This essay explains why a benevolent government should pursue fiscal and monetary policies that lead to those recurrent currency crises and subsequent periods of pegging. It is shown that the optimal policy induces a competitive equilibrium that displays a boom in periods of below-average devaluation and a recession in periods of above-average devaluation. A currency crisis (pegging) can be understood as an optimal policy answer to a recession (boom).
Abstract:
Includes bibliography
Abstract:
Expert debate and synthesis of research to inform future management approaches for acute whiplash disorders.
Abstract:
What municipal recycling rate is socially optimal? One credible answer is the recycling rate that minimizes the overall social cost of managing municipal waste. This social cost comprises all budgetary costs and revenues associated with operating municipal waste and recycling programs, all costs to recycling households of preparing and storing recyclable materials for collection, all external disposal costs associated with waste disposed of at landfills or incinerators, and all external benefits associated with the provision of recycled materials that foster environmentally efficient production processes. This paper discusses how to estimate these four components of social cost to then estimate the optimal recycling rate. (C) 2013 Elsevier B.V. All rights reserved.
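The estimation idea above can be sketched numerically: given a social-cost function built from the four components, the optimal rate is the minimizer over [0, 1]. All functional forms and coefficients in the toy cost below are invented assumptions for illustration, not estimates from the paper.

```python
def optimal_recycling_rate(cost_fn, step=0.001):
    """Grid-search the recycling rate r in [0, 1] that minimizes a
    supplied social-cost function (a sketch, not the paper's estimator)."""
    best_r, best_c = 0.0, float("inf")
    r = 0.0
    while r <= 1.0:
        c = cost_fn(r)
        if c < best_c:
            best_r, best_c = r, c
        r = round(r + step, 9)
    return best_r

def toy_social_cost(r):
    """Toy versions of the four components (assumed shapes): program
    budget and household effort rise convexly with the recycling rate,
    external disposal cost falls as less waste is landfilled, and the
    recycled-material benefit (a negative cost) rises with r."""
    budget = 10 * r**2         # net budgetary cost of the programs
    household = 15 * r**2      # household preparation/storage effort
    disposal = 20 * (1 - r)    # external landfill/incinerator cost
    benefit = 8 * r            # external benefit of recycled materials
    return budget + household + disposal - benefit
```

With these toy shapes the interior optimum sits where marginal program-plus-household cost equals marginal disposal-plus-benefit savings, illustrating why the socially optimal rate is generally strictly between 0% and 100%.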
Abstract:
The variety of possible economic consequences of bank solvency problems has contributed to the fact that research questions concerning equity capital regulation in the banking sector have been discussed quite intensively for some years. The effects of capital regulation can manifest themselves in numerous ways; for example, an effect on the level of lending rates cannot be ruled out. To analyze this potential relationship more clearly, one on which the earlier literature has not yet focused, the present study presents a theoretical model in which a connection exists between an optimal bank lending rate and capital regulation. The optimality of lending rates is considered from two perspectives: profit maximization and maximization of the probability of solvency are compared as optimality criteria. From the results it can be concluded that these two optimal lending rates are not identical and are influenced differently by capital regulation. According to the theoretical results, it is possible that, when equity capital is increased while bank deposits remain constant, the profit-maximizing optimum does not change, whereas the optimum associated with maximizing the probability of solvency decreases.