967 results for Design Automation
Abstract:
Two different designs for a negative binary adder-subtracter are compared. One design uses the method of a hybrid carry-borrow, while the other uses the method of polarization and addition.
Abstract:
This article addresses the problem of how to select the optimal combination of sensors and how to determine their optimal placement in a surveillance region in order to meet the given performance requirements at a minimal cost for a multimedia surveillance system. We propose to solve this problem by obtaining a performance vector, with its elements representing the performances of subtasks, for a given input combination of sensors and their placement. Then we show that the optimal sensor selection problem can be converted into an Integer Linear Programming (ILP) problem by using a linear model for computing the optimal performance vector corresponding to a sensor combination, that is, the performance vector achieved under that combination's optimal placement. To demonstrate the utility of our technique, we design and build a surveillance system consisting of PTZ (Pan-Tilt-Zoom) cameras and active motion sensors for capturing faces. Finally, we show experimentally that optimal placement of sensors based on the design maximizes the system performance.
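A minimal sketch of the ILP formulation this abstract describes, using the open-source PuLP library; the costs, per-subtask performance coefficients, and requirement thresholds are hypothetical, and performance is assumed to add linearly across selected sensors (the paper's actual linear model is not reproduced here).

```python
# Minimal ILP sketch of cost-minimal sensor selection (illustrative, not the
# paper's exact model). Each candidate i is a sensor/placement option with a
# cost and an assumed linear contribution perf[i][k] to subtask k.
import pulp

costs = [120, 80, 150, 60]            # hypothetical sensor costs
perf = [[0.6, 0.1],                   # perf[i][k]: contribution of option i
        [0.2, 0.5],                   # to the performance of subtask k
        [0.7, 0.4],
        [0.1, 0.3]]
req = [0.8, 0.6]                      # required performance per subtask

prob = pulp.LpProblem("sensor_selection", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(costs))]

# Objective: minimize total deployment cost.
prob += pulp.lpSum(costs[i] * x[i] for i in range(len(costs)))

# Each subtask's aggregate performance must meet its requirement.
for k in range(len(req)):
    prob += pulp.lpSum(perf[i][k] * x[i] for i in range(len(costs))) >= req[k]

prob.solve()
print([int(v.value()) for v in x], "cost:", pulp.value(prob.objective))
```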
Abstract:
In this paper, we exploit the idea of decomposition to match buyers and sellers in an electronic exchange for trading large volumes of homogeneous goods, where the buyers and sellers specify marginal-decreasing piecewise constant price curves to capture volume discounts. Such exchanges are relevant for automated trading in many e-business applications. The problem of determining winners and Vickrey prices in such exchanges is known to have a worst-case complexity equal to that of as many as (1 + m + n) NP-hard problems, where m is the number of buyers and n is the number of sellers. Our method decomposes the overall exchange problem into two separate and simpler problems: 1) a forward auction and 2) a reverse auction, which turn out to be generalized knapsack problems. In the proposed approach, we first determine the quantity of units to be traded between the sellers and the buyers using fast heuristics developed by us. Next, we solve a forward auction and a reverse auction using fully polynomial time approximation schemes available in the literature. The proposed approach has worst-case polynomial time complexity, and our experimentation shows that the approach produces good quality solutions to the problem. Note to Practitioners: In recent times, electronic marketplaces have provided an efficient way for businesses and consumers to trade goods and services. The use of innovative mechanisms and algorithms has made it possible to improve the efficiency of electronic marketplaces by enabling optimization of revenues for the marketplace and of utilities for the buyers and sellers. In this paper, we look at single-item, multiunit electronic exchanges. These are electronic marketplaces where buyers submit bids and sellers submit asks for multiple units of a single item. We allow buyers and sellers to specify volume discounts using suitable functions. Such exchanges are relevant for high-volume business-to-business trading of standard products, such as silicon wafers, very large-scale integrated chips, desktops, telecommunications equipment, commoditized goods, etc. The problem of determining winners and prices in such exchanges is known to involve solving many NP-hard problems. Our paper exploits the familiar idea of decomposition, uses certain algorithms from the literature, and develops two fast heuristics to solve the problem in a near-optimal way in worst-case polynomial time.
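To illustrate the first step, here is a toy greedy matching that fixes the traded quantity; it is a stand-in for the authors' heuristics and assumes flat unit prices rather than the paper's marginal-decreasing piecewise constant price curves.

```python
# Toy greedy match of buy bids and sell asks to fix the traded quantity
# (illustrative stand-in for the paper's heuristics; real bids are
# marginal-decreasing piecewise constant price curves, not flat prices).
def traded_quantity(bids, asks):
    """bids/asks: lists of (unit_price, quantity)."""
    bids = sorted(bids, key=lambda b: -b[0])   # highest buyers first
    asks = sorted(asks, key=lambda a: a[0])    # cheapest sellers first
    total, i, j = 0, 0, 0
    bid_q, ask_q = 0, 0
    while i < len(bids) and j < len(asks):
        if bid_q == 0:
            price_b, bid_q = bids[i]
        if ask_q == 0:
            price_a, ask_q = asks[j]
        if price_b < price_a:                  # no more profitable trades
            break
        q = min(bid_q, ask_q)
        total += q
        bid_q -= q
        ask_q -= q
        if bid_q == 0:
            i += 1
        if ask_q == 0:
            j += 1
    return total

print(traded_quantity([(10, 5), (8, 4)], [(6, 3), (9, 7)]))  # -> 5
```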
Abstract:
In this paper we first describe a framework to model the sponsored search auction on the web as a mechanism design problem. Using this framework, we design a novel auction which we call the OPT (optimal) auction. The OPT mechanism maximizes the search engine's expected revenue while achieving Bayesian incentive compatibility and individual rationality of the advertisers. We show that the OPT mechanism is superior to two of the most commonly used mechanisms for sponsored search, namely (1) GSP (Generalized Second Price) and (2) VCG (Vickrey-Clarke-Groves). We then show an important revenue equivalence result: the expected revenue earned by the search engine is the same for all three mechanisms provided the advertisers are symmetric and the number of sponsored slots is strictly less than the number of advertisers.
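For concreteness, a small sketch of the two baseline payment rules in a position auction with known slot click-through rates; the paper's OPT mechanism itself is not reproduced, and the bids and CTRs below are hypothetical.

```python
# Per-impression payments in a position auction with K slots (illustrative,
# not the paper's OPT mechanism). ctr: slot click-through rates, decreasing.
def gsp_payments(bids, ctr):
    """Slot k winner pays the (k+1)-th bid per click: b[k+1] * ctr[k] per impression."""
    b = sorted(bids, reverse=True)
    return [b[k + 1] * ctr[k] if k + 1 < len(b) else 0.0
            for k in range(min(len(ctr), len(b)))]

def vcg_payments(bids, ctr):
    """Standard recursive VCG: p[k] = p[k+1] + b[k+1] * (ctr[k] - ctr[k+1]), ctr[K] = 0."""
    b = sorted(bids, reverse=True)
    K = min(len(ctr), len(b))
    c = list(ctr[:K]) + [0.0]
    p = [0.0] * K
    for k in range(K - 1, -1, -1):
        below = p[k + 1] if k + 1 < K else 0.0
        nxt = b[k + 1] if k + 1 < len(b) else 0.0
        p[k] = below + nxt * (c[k] - c[k + 1])
    return p

bids, ctr = [10, 8, 5, 2], [0.30, 0.15]
print(gsp_payments(bids, ctr))  # [2.4, 0.75] per impression
print(vcg_payments(bids, ctr))  # [1.95, 0.75]: GSP charges at least VCG here
```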
Abstract:
With the increasing adoption of wireless technology, it is reasonable to expect an increase in the demand for supporting both real-time multimedia and high-rate reliable data services. Next generation wireless systems employ an Orthogonal Frequency Division Multiplexing (OFDM) physical layer owing to the high data rate transmissions that are possible without an increase in bandwidth. Towards improving the performance of these systems, we look at the design of resource allocation algorithms at the medium-access (MAC) layer and their impact on higher layers. While TCP-based elastic traffic needs reliable transport, UDP-based real-time applications have stringent delay and rate requirements. The MAC algorithms, while catering to the heterogeneous service needs of these higher layers, trade off between maximizing the system capacity and providing fairness among users. The novelty of this work is the proposal of various channel-aware resource allocation algorithms at the MAC layer, which can result in significant performance gains in an OFDM-based wireless system.
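As one concrete example of a channel-aware allocation rule (a generic proportional-fair scheduler, not necessarily one of the paper's algorithms), each subcarrier can be given to the user with the best ratio of instantaneous rate to smoothed throughput; the rates below are hypothetical.

```python
# Generic proportional-fair subcarrier allocation (illustrative). Each
# subcarrier goes to the user maximizing instantaneous_rate / average
# throughput, trading total capacity against fairness among users.
def pf_allocate(rates, avg, beta=0.1):
    """rates[u][s]: achievable rate of user u on subcarrier s this frame;
    avg[u]: exponentially smoothed throughput of user u (updated in place)."""
    n_users, n_sub = len(rates), len(rates[0])
    alloc = [[] for _ in range(n_users)]
    served = [0.0] * n_users
    for s in range(n_sub):
        u = max(range(n_users), key=lambda u: rates[u][s] / max(avg[u], 1e-9))
        alloc[u].append(s)
        served[u] += rates[u][s]
    for u in range(n_users):                      # update throughput averages
        avg[u] = (1 - beta) * avg[u] + beta * served[u]
    return alloc

rates = [[4.0, 1.0, 2.0], [1.0, 3.0, 2.5]]        # hypothetical per-subcarrier rates
avg = [2.0, 1.0]
print(pf_allocate(rates, avg))                    # [[0], [1, 2]]
```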
Abstract:
In this thesis work, we design rigorous and efficient protocols/mechanisms for different types of wireless networks using a mechanism design [1] and game-theoretic [2] approach. Our work can broadly be viewed in two parts. In the first part, we concentrate on ad hoc wireless networks [3], [4]. In particular, we consider broadcast in these networks where each node is owned by an independent and selfish user. Being selfish, these nodes do not forward the broadcast packets. All existing protocols for broadcast assume that nodes forward the transit packets, so there is a need for developing new broadcast protocols that overcome node selfishness. In our paper [5], we develop a strategy-proof pricing mechanism which we call the immediate predecessor node pricing mechanism (IPNPM) and an efficient new broadcast protocol based on IPNPM. We show the efficacy of our proposed broadcast protocol using simulation results.
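The abstract does not spell out IPNPM, but the standard construction behind strategy-proof pricing is a threshold (second-price) payment, sketched here for a single forwarder choice; node names and costs are hypothetical.

```python
# Textbook strategy-proof payment (not the paper's IPNPM, whose details are
# beyond this abstract): pick the forwarder with the lowest declared cost and
# pay it the second-lowest declared cost. Under- or over-reporting cannot
# improve a node's payoff, so truthful cost declaration is a dominant strategy.
def select_forwarder(declared_costs):
    """declared_costs: {node_id: declared forwarding cost}."""
    ranked = sorted(declared_costs, key=declared_costs.get)
    winner = ranked[0]
    payment = declared_costs[ranked[1]]   # threshold (second-price) payment
    return winner, payment

print(select_forwarder({"a": 3.0, "b": 5.0, "c": 4.0}))  # ('a', 4.0)
```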
Abstract:
802.11 WLANs are characterized by high bit error rates and frequent changes in network topology. The key feature that distinguishes WLANs from wired networks is the multi-rate transmission capability, which helps to accommodate a wide range of channel conditions. This has a significant impact on higher layers such as the routing and transport levels. While many WLAN products provide rate control at the hardware level to adapt to the channel conditions, some chipsets, like Atheros, do not have support for automatic rate control. We first present a design and implementation of an FER-based automatic rate control state machine, which utilizes the statistics available at the device driver to find the optimal rate. The results show that the proposed rate switching mechanism adapts quite fast to the channel conditions. The hop count metric used by current routing protocols has proven itself for single-rate networks, but it fails to take into account other important factors in a multi-rate network environment. We propose transmission time as a better path quality metric to guide routing decisions. It incorporates the effects of contention for the channel, the air time to send the data and the asymmetry of links. In this paper, we present a new design for a multi-rate mechanism as well as a new routing metric that is responsive to the rate. We address the issues involved in using transmission time as a metric and present a comparison of the performance of different metrics for dynamic routing.
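A minimal sketch of what an FER-driven rate-control state machine and a transmission-time metric might look like; the thresholds, windowing, and per-frame overhead figure are illustrative assumptions, not the paper's implementation.

```python
RATES_MBPS = [6, 9, 12, 18, 24, 36, 48, 54]      # 802.11a/g rate set

class RateController:
    """Step down on high frame error rate, step up on low (thresholds assumed)."""
    def __init__(self, up_thresh=0.10, down_thresh=0.40):
        self.idx = len(RATES_MBPS) // 2          # start mid-table at 24 Mb/s
        self.up_thresh, self.down_thresh = up_thresh, down_thresh

    def update(self, sent, failed):
        """Call once per measurement window with driver frame counters."""
        if sent == 0:
            return RATES_MBPS[self.idx]
        fer = failed / sent
        if fer > self.down_thresh and self.idx > 0:
            self.idx -= 1                        # channel degraded: fall back
        elif fer < self.up_thresh and self.idx < len(RATES_MBPS) - 1:
            self.idx += 1                        # channel good: probe higher rate
        return RATES_MBPS[self.idx]

def tx_time_metric(frame_bytes, rate_mbps, fer, overhead_us=100):
    """Expected air time per frame in microseconds; lower marks a better link."""
    air_us = frame_bytes * 8 / rate_mbps         # bits / (Mb/s) = microseconds
    return (overhead_us + air_us) / max(1 - fer, 1e-6)

rc = RateController()
print(rc.update(sent=100, failed=50))            # FER 0.5 -> steps down to 18
print(round(tx_time_metric(1500, 54, 0.1), 1))   # ~358.0 us
```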
Abstract:
An important issue in the design of a distributed computing system (DCS) is the development of a suitable protocol. This paper presents an effort to systematize the protocol design procedure for a DCS. Protocol design and development can be divided into six phases: specification of the DCS, specification of protocol requirements, protocol design, specification and validation of the designed protocol, performance evaluation, and hardware/software implementation. This paper describes techniques for the second and third phases, while the first phase has been considered by the authors in their earlier work. Matrix-based and set-theoretic approaches are used for the specification of a DCS and of the protocol requirements. These two formal specification techniques form the basis of a simple and straightforward procedure for the design of the protocol. The applicability of this design procedure is illustrated with an example of a computing system encountered on board a spacecraft. A Petri-net based approach has been adopted to model the protocol. The methodology developed in this paper can be used in other DCS applications.
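To make the Petri-net modelling step concrete, here is a minimal token-game executor over a toy request/acknowledge handshake; it is illustrative only, and the paper's protocol net is not reproduced.

```python
# Minimal Petri-net executor: places hold token counts; a transition fires
# when every input place has a token, consuming one from each input place
# and producing one in each output place.
def enabled(marking, transition):
    return all(marking[p] > 0 for p in transition["in"])

def fire(marking, transition):
    for p in transition["in"]:
        marking[p] -= 1
    for p in transition["out"]:
        marking[p] += 1

# Toy request/acknowledge handshake between two nodes of a DCS.
marking = {"idle": 1, "req_sent": 0, "ack_sent": 0}
t_send_req = {"in": ["idle"], "out": ["req_sent"]}
t_send_ack = {"in": ["req_sent"], "out": ["ack_sent"]}

for t in (t_send_req, t_send_ack):
    if enabled(marking, t):
        fire(marking, t)
print(marking)  # {'idle': 0, 'req_sent': 0, 'ack_sent': 1}
```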
Abstract:
In this paper, we propose a systolic architecture for hidden-surface removal. Systolic architecture is a kind of parallel architecture best known for its easy VLSI implementability. After discussing the design details of the architecture, we present the results of the simulation experiments conducted in order to evaluate the performance of the architecture.
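One way to picture a systolic organization for this task (an illustrative arrangement, not necessarily the paper's design) is a pipelined z-buffer: one cell per polygon, with pixel records streaming through all cells.

```python
# Systolic-style hidden-surface removal sketch: each cell holds one polygon;
# a pixel record flows through every cell, and a cell overwrites the record
# when its polygon covers the pixel and lies closer to the viewer.
def cell(pixel, polygon):
    x, y, depth, color = pixel
    covers, z, c = polygon["covers"], polygon["depth"], polygon["color"]
    if covers(x, y) and z(x, y) < depth:
        return (x, y, z(x, y), c)       # this polygon is now the visible one
    return pixel

polygons = [
    {"covers": lambda x, y: x < 5, "depth": lambda x, y: 2.0, "color": "red"},
    {"covers": lambda x, y: True,  "depth": lambda x, y: 3.0, "color": "blue"},
]

def visible(x, y):
    pixel = (x, y, float("inf"), "background")
    for poly in polygons:               # in hardware, all cells work in parallel
        pixel = cell(pixel, poly)
    return pixel[3]

print(visible(2, 0), visible(8, 0))     # red blue
```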
Abstract:
Submergence of land is a major impact of large hydropower projects. Such projects are often also dogged by siltation, delays in construction and heavy debt burdens, factors that are not considered in the project planning exercise. A simple constrained optimization model for the benefit-cost analysis of large hydropower projects that considers these features is proposed. The model is then applied to two sites in India. Using the potential productivity of an energy plantation on the submergible land is suggested as a reasonable approach to estimating the opportunity cost of submergence. Optimum project dimensions are calculated for various scenarios. Results indicate that the inclusion of submergence cost may lead to a substantial reduction in net present value and hence in project viability. Parameters such as project lifespan, construction time, discount rate and external debt burden are also of significance. The designs proposed by the planners are found to be uneconomic, while even the optimal design may not be viable for more typical scenarios. The concept of energy opportunity cost is useful for preliminary screening; some projects may require more detailed calculations. The optimization approach helps identify significant trade-offs between energy generation and land availability.
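A back-of-envelope sketch of the benefit-cost logic, with the submerged land priced at the forgone yield of an energy plantation; every number below is hypothetical, and the paper's constrained optimization model is not reproduced.

```python
# NPV of a hydropower project with submergence priced as the forgone annual
# value of an energy plantation on the submerged land (all figures made up).
def npv(energy_mwh, price_per_mwh, capital_cost, submerged_ha,
        plantation_value_per_ha, lifespan_years, discount_rate,
        construction_years=5):
    value = -capital_cost                       # capital outlay up front
    for t in range(construction_years, construction_years + lifespan_years):
        annual = (energy_mwh * price_per_mwh
                  - submerged_ha * plantation_value_per_ha)  # opportunity cost
        value += annual / (1 + discount_rate) ** t
    return value

print(round(npv(energy_mwh=500_000, price_per_mwh=40.0, capital_cost=3e8,
                submerged_ha=10_000, plantation_value_per_ha=500.0,
                lifespan_years=40, discount_rate=0.10)))
```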
Abstract:
Our main result is a new sequential method for the design of decentralized control systems. Controller synthesis is conducted on a loop-by-loop basis, and at each step the designer obtains an explicit characterization of the class C of all compensators for the loop being closed that place the closed-loop system poles in a specified closed region D of the s-plane, instead of merely stabilizing the closed-loop system. Since one of the primary goals of control system design is to satisfy basic performance requirements that are often directly related to closed-loop pole location (bandwidth, percentage overshoot, rise time, settling time), this approach immediately allows the designer to focus on other concerns such as robustness and sensitivity. By considering only compensators from class C and seeking the optimum member of that set with respect to sensitivity or robustness, the designer has a clearly defined, limited optimization problem to solve without concern for loss of performance. A solution to the decentralized tracking problem is also provided. This design approach has the attractive features of expandability, the use of only 'local models' for controller synthesis, and fault tolerance with respect to certain types of failure.
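A small numerical check of the underlying idea, testing whether one candidate compensator's closed-loop poles fall in a region D defined by a minimum decay rate and damping ratio; the paper characterizes the whole class C, which this sketch does not attempt, and the plant and compensator are hypothetical.

```python
# Checking membership of closed-loop poles in a region D of the s-plane.
# Polynomials are numpy coefficient arrays, highest power first.
import numpy as np

def closed_loop_poles(num_p, den_p, num_c, den_c):
    """Unity-feedback characteristic polynomial: den_p*den_c + num_p*num_c."""
    char = np.polyadd(np.polymul(den_p, den_c), np.polymul(num_p, num_c))
    return np.roots(char)

def in_region_D(poles, sigma=1.0, zeta=0.5):
    """D: Re(s) <= -sigma and damping ratio >= zeta (rise time / overshoot)."""
    for s in poles:
        damping = -s.real / max(abs(s), 1e-12)
        if s.real > -sigma or damping < zeta:
            return False
    return True

# Plant 1/(s(s+1)) with lead compensator 20(s+2)/(s+10)  (hypothetical).
poles = closed_loop_poles([1.0], [1.0, 1.0, 0.0], [20.0, 40.0], [1.0, 10.0])
print(poles, in_region_D(poles))   # all poles in D -> True
```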
Abstract:
The importance of long-range prediction of rainfall patterns for devising and planning agricultural strategies cannot be overemphasized. However, the prediction of rainfall patterns remains a difficult problem and the desired level of accuracy has not been reached. The conventional methods for prediction of rainfall use either dynamical or statistical modelling. In this article we report the results of a new modelling technique using artificial neural networks. Artificial neural networks are especially useful where the dynamical processes and their interrelations for a given phenomenon are not known with sufficient accuracy. Since conventional neural networks were found to be unsuitable for simulating and predicting rainfall patterns, a generalized neural network structure was explored and found to provide consistent prediction (hindcast) of all-India annual mean rainfall with good accuracy. Performance and consistency of this network are evaluated and compared with those of other (conventional) neural networks. It is shown that the generalized network can make consistently good predictions of annual mean rainfall. Immediate applications and the potential of such a prediction system are discussed.
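For illustration only, a toy single-hidden-layer network fitted to a synthetic series; it uses neither real all-India rainfall data nor the article's generalized network structure, and the lag window, layer size, and learning rate are arbitrary choices.

```python
# Toy neural hindcast: predict next "annual rainfall" value from the previous
# three, on a synthetic series (not real data, not the article's network).
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(50, dtype=float)
rain = 850 + 60 * np.sin(years / 3.0) + rng.normal(0, 10, 50)  # synthetic, mm

X = np.stack([rain[i:i + 3] for i in range(47)])   # lag window of 3 years
y = rain[3:]
Xn, yn = (X - X.mean()) / X.std(), (y - y.mean()) / y.std()

W1, b1 = rng.normal(0, 0.5, (3, 8)), np.zeros(8)   # one hidden layer of 8 units
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr = 0.01
for _ in range(2000):
    h = np.tanh(Xn @ W1 + b1)                      # forward pass
    pred = (h @ W2 + b2).ravel()
    err = pred - yn
    gW2 = h.T @ err[:, None] / len(yn)             # backprop through MSE loss
    gh = err[:, None] @ W2.T * (1 - h ** 2)
    gW1 = Xn.T @ gh / len(yn)
    W2 -= lr * gW2; b2 -= lr * err.mean()
    W1 -= lr * gW1; b1 -= lr * gh.mean(axis=0)

print("train MSE:", float(np.mean((pred - yn) ** 2)))
```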
Abstract:
Electronic exchanges are double-sided marketplaces that allow multiple buyers to trade with multiple sellers, with aggregation of demand and supply across the bids to maximize the revenue in the market. In this paper, we propose a new design approach for a one-shot exchange that collects bids from buyers and sellers and clears the market at the end of the bidding period. The main principle of the approach is to decouple the allocation from the pricing. It is well known that it is impossible for an exchange with voluntary participation to be both efficient and budget-balanced. Budget-balance is a mandatory requirement for an exchange to operate at a profit. Our approach is to allocate the trade so as to maximize the reported values of the agents. The pricing is then posed as a payoff determination problem that distributes the total payoff fairly to all agents, with budget-balance imposed as a constraint. We devise an arbitration scheme by an axiomatic approach to solve the payoff determination problem using the added-value concept of game theory.
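A toy illustration of decoupling allocation from pricing (not the authors' axiomatic arbitration scheme): match bids and asks to maximize reported value, then clear every trade at one price between the marginal bid and ask, which keeps the exchange exactly budget-balanced, though, per the impossibility result above, not simultaneously efficient and incentive-compatible.

```python
# Budget-balanced one-shot clearing for single-unit bids and asks.
def clear(bids, asks):
    """bids/asks: unit prices for single units; returns (#trades, clearing price)."""
    bids = sorted(bids, reverse=True)
    asks = sorted(asks)
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1                                   # k trades maximize reported value
    if k == 0:
        return 0, None
    price = (bids[k - 1] + asks[k - 1]) / 2      # any price in [ask, bid] balances budget
    return k, price

print(clear([10, 9, 4], [3, 5, 8]))  # (2, 7.0)
```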
Abstract:
Sensor network nodes exhibit characteristics of both embedded systems and general-purpose systems. A sensor network operating system is a kind of embedded operating system, but unlike a typical embedded operating system, a sensor network operating system may not be real-time, and is constrained by limited memory and energy. Most sensor network operating systems are based on an event-driven approach, which is efficient in terms of time and space and does not require a separate stack for each execution context. With this model, however, it is difficult to implement long-running tasks, like cryptographic operations. Thread-based computation requires a separate stack for each execution context and is less efficient in terms of time and space. In this paper, we propose a thread-based execution model that uses only a fixed number of stacks: the number of stacks at each priority level is fixed. It minimizes the stack requirement for a multi-threading environment and at the same time provides ease of programming. We give an implementation of this model in Contiki OS by separating the thread implementation completely from the protothread implementation. We have tested our OS by implementing a clock synchronization protocol using it.
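An abstract sketch of the fixed-stacks idea (not Contiki code): a thread runs only after acquiring one of the fixed stack slots at its priority level, which bounds total stack memory. Slot counts and thread names below are hypothetical.

```python
# Fixed-stacks-per-priority scheduling model: threads wait in per-priority
# ready queues until a stack slot at their priority is free.
from collections import deque

STACKS_PER_PRIORITY = {0: 1, 1: 2}      # e.g. one high-, two low-priority stacks

class Scheduler:
    def __init__(self):
        self.free = dict(STACKS_PER_PRIORITY)
        self.ready = {p: deque() for p in STACKS_PER_PRIORITY}
        self.running = []

    def spawn(self, name, priority):
        self.ready[priority].append(name)
        self._dispatch()

    def exit(self, name, priority):
        self.running.remove(name)       # thread done: its stack slot is reusable
        self.free[priority] += 1
        self._dispatch()

    def _dispatch(self):
        for p in sorted(self.ready):    # lower number = higher priority
            while self.ready[p] and self.free[p] > 0:
                self.free[p] -= 1
                self.running.append(self.ready[p].popleft())

s = Scheduler()
for i in range(3):
    s.spawn(f"t{i}", 1)
print(s.running)                        # ['t0', 't1'] -- t2 waits for a stack
s.exit("t0", 1)
print(s.running)                        # ['t1', 't2']
```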
Abstract:
A new scheme for robust estimation of the partial state of linear time-invariant multivariable systems is presented, and it is shown how this may be used for the detection of sensor faults in such systems. We consider an observer to be robust if it generates a faithful estimate of the plant state in the face of modelling uncertainty or plant perturbations. Using the Stable Factorization approach, we formulate the problem of optimal robust observer design by minimizing an appropriate norm on the estimation error. A logical candidate is the 2-norm, corresponding to an H∞ optimization problem, for which solutions are readily available. In the special case of a stable plant, the optimal fault diagnosis scheme reduces to an internal model control architecture.
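A generic observer-based fault detection sketch (a discrete-time Luenberger observer with a residual threshold, not the paper's Stable Factorization synthesis); the plant matrices, observer gain, fault size, and threshold are all hypothetical.

```python
# Observer-based sensor fault detection: the residual y - C*xhat stays small
# under normal operation and jumps when an additive sensor fault appears.
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])    # discrete-time plant (hypothetical)
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.2]])              # observer gain (chosen so A - L@C is stable)

x = np.array([1.0, -0.5])
xhat = x.copy()                           # start with a converged estimate
for k in range(40):
    y = C @ x
    if k >= 20:
        y = y + 2.0                       # additive sensor fault from step 20
    residual = y - C @ xhat
    if abs(residual[0]) > 0.5:            # threshold test on the residual
        print(f"step {k}: sensor fault flagged, residual {residual[0]:.2f}")
        break
    xhat = A @ xhat + L @ residual        # observer update
    x = A @ x                             # plant update (no input, for brevity)
```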