466 results for maximization


Relevance:

10.00%

Publisher:

Abstract:

The paper focuses on a specific political system lying between democracy and autocracy, which has similarities to both. Called here a pseudo-democracy, it is examined from the point of view of rents. The paper enquires how a democracy can allow a single party to dominate the political landscape for a long period. The author constructs a model linking rent creation to vote maximization, arguing that it can be rational for incumbents to increase rents beyond the short-term optimum. The model also reveals that surplus rents may offer long-term gains to an elite by strengthening its clientele, challenging the systemic political framework, and holding back the opposition. Historical examples show that the end result of a pseudo-democratic system is usually weakened economic performance and widespread corruption, which eventually force its collapse.

Relevance:

10.00%

Publisher:

Abstract:

This dissertation discusses resource allocation mechanisms in several network topologies, including infrastructure wireless networks, non-infrastructure wireless networks, and wire-cum-wireless networks. Different networks may have different resource constraints. Based on actual technologies and implementation models, utility functions, game theory, and a modern control algorithm are introduced to balance power, bandwidth, and customer satisfaction in the system.

In infrastructure wireless networks, utility functions have been used in Third Generation (3G) cellular networks, where the network tries to maximize total utility. In this dissertation, revenue maximization is set as the objective. Compared with previous work on utility maximization, revenue maximization is more practical for cellular network operators to implement. Pricing strategies are studied, and algorithms are given to find the optimal price combination of power and rate that maximizes profit without degrading Quality of Service (QoS) performance.

In non-infrastructure wireless networks, power capacity is limited by the small size of the nodes. In such a network, nodes need to transmit traffic not only for themselves but also for their neighbors, so power management becomes the most important issue for overall network performance. Our routing algorithm, based on a utility function, sets up a flexible framework for different users with different concerns in the same network. The algorithm allows users to make trade-offs among multiple resource parameters, and its flexibility makes it a suitable solution for large-scale non-infrastructure networks. This dissertation also covers non-cooperation problems: by combining game theory and utility functions, equilibrium points can be found among rational users, which enhances cooperation in the network.

Finally, a wire-cum-wireless network architecture is introduced. This architecture can support multiple services over multiple networks with smart resource allocation methods. Although a SONET-to-WiMAX case is used for the analysis, the mathematical procedure and resource allocation scheme could serve as universal solutions for infrastructure, non-infrastructure, and combined networks.
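The revenue-maximization idea in the abstract above can be sketched in a few lines. This is a minimal illustration, not the dissertation's actual model: it assumes a hypothetical linear demand curve and a single scalar price, and finds the revenue-maximizing price by grid search under a cell capacity limit.

```python
# Sketch: revenue-maximizing pricing for a cellular operator.
# Assumed linear demand d(p) = max(0, a - b*p); the cell can serve
# at most `capacity` units of rate. All parameters are illustrative.

def demand(p, a=10.0, b=2.0):
    """Aggregate rate demanded at unit price p (assumed linear)."""
    return max(0.0, a - b * p)

def revenue(p, capacity=8.0):
    """Operator revenue: price times the rate actually served."""
    served = min(demand(p), capacity)
    return p * served

def optimal_price(prices):
    """Grid search for the revenue-maximizing price."""
    return max(prices, key=revenue)

prices = [i / 100 for i in range(1, 500)]
p_star = optimal_price(prices)
```

With these numbers the unconstrained optimum (p = a/2b = 2.5) is feasible, since demand at that price stays below capacity; with a tighter capacity the capacity constraint would bind instead.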

Relevance:

10.00%

Publisher:

Abstract:

This dissertation is a discourse on the capital market and its interactive framework of acquisition and issuance of financial assets that drive the economy from both sides: investors/lenders and issuers/users of capital assets. My work consists of four essays in financial economics that offer a spectrum of revisions to this significant area of study. The first essay is a delineation of the capital market over the past half-century, covering major developments on issues that pertain to the investor's opportunity set and the corporation's capital-raising availability set. This chapter has merits on two counts: (i) a comprehensive account of capital markets and return-generating assets, and (ii) a backdrop against which I present my findings in Chapters 2 through 4.

In Chapter 2, I rework the Markowitz-Roy-Tobin structure of the efficient frontier and the Separation Theorem. Starting with a 2-asset portfolio and extending the paradigm to an n-asset portfolio, I bring out the optimal choice of assets for an investor under constrained utility maximization. In this chapter, I analyze the selection- and revision-theoretic construct and bring out optimum choices, and I ascertain the effect of a change in an investor's perceived risk or return on portfolio composition.

Chapter 3 looks into corporations that issue market securities. The question of how a corporation decides what kinds of securities to issue in the marketplace to raise funds brings up the classic value-invariance proposition of Modigliani and Miller. I question the general validity of the classic Modigliani-Miller results and modify the existing literature on the celebrated value-invariance proposition, filling a gap that has existed in the literature for almost half a century.

Chapter 4 takes the Modigliani-Miller regime to its correct prescription in the presence of corporate and personal taxes. I show that Modigliani-Miller's age-old proposition needs corrections and extensions, which I derive. Overall, my dissertation brings these corrections and extensions to the existing literature as my findings, showing that capital markets are in an ever-changing state of necessary revision.
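The 2-asset starting point mentioned in Chapter 2 has a well-known closed form: the minimum-variance weight on asset 1 is (σ₂² − ρσ₁σ₂)/(σ₁² + σ₂² − 2ρσ₁σ₂). The sketch below computes it with illustrative numbers (a textbook result, not the dissertation's full n-asset treatment).

```python
# Two-asset Markowitz frontier: minimum-variance portfolio in closed form.
# The volatilities and means below are illustrative assumptions.

def min_variance_weight(s1, s2, rho):
    """Weight on asset 1 that minimizes portfolio variance."""
    cov = rho * s1 * s2
    return (s2 ** 2 - cov) / (s1 ** 2 + s2 ** 2 - 2 * cov)

def portfolio_stats(w, m1, m2, s1, s2, rho):
    """Mean and variance of the portfolio with weight w on asset 1."""
    mean = w * m1 + (1 - w) * m2
    var = (w ** 2 * s1 ** 2 + (1 - w) ** 2 * s2 ** 2
           + 2 * w * (1 - w) * rho * s1 * s2)
    return mean, var

w_star = min_variance_weight(0.2, 0.3, 0.1)   # sigma1=20%, sigma2=30%, rho=0.1
mean, var = portfolio_stats(w_star, 0.10, 0.08, 0.2, 0.3, 0.1)
```

Perturbing the weight in either direction raises the variance, which is the sense in which w_star sits at the bottom of the frontier's bullet.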

Relevance:

10.00%

Publisher:

Abstract:

The first chapter analyzes conditional assistance programs, which generate conflicting relationships between international financial institutions (IFIs) and member countries. The experience of IFIs with conditionality in the 1990s led them to allow countries more latitude in the design of their reform programs. A reformist government does not need conditionality, and conditionality is useless if the government does not want to reform. A government that faces opposition, however, may use conditionality and the help of pro-reform lobbies as a lever to counteract anti-reform groups and succeed in implementing reforms.

The second chapter analyzes economies saddled with taxes and regulations. I consider an economy in which many taxes, subsidies, and other distortionary restrictions are in place simultaneously. Starting from a laissez-faire equilibrium that is inefficient because of some domestic distortion, a small trade tax or subsidy can yield a first-order welfare improvement, even if the instrument itself creates distortions of its own. This may result in "welfare paradoxes". The purpose of the chapter is to quantify the welfare effects of changes in tax rates in a small open economy. I conduct the simulation in the context of an intertemporal utility-maximization framework, applying numerical methods to the model developed by Karayalcin. I introduce changes in the tax rates and quantify the impact on welfare, consumption, and foreign assets, as well as the path to the new steady-state values.

The third chapter studies the role of stock markets and adjustment costs in the international transmission of supply shocks. The analysis of a positive supply shock originating in one country shows that, on impact, the shock leads to an immediate stock market boom in the country enjoying the technological advance, while the other country suffers depressed stock market prices as demand for its equity declines. A period of adjustment then begins, culminating in steady-state capital and output levels identical to those before the shock; the capital stock of one country undergoes a non-monotonic adjustment. The model is tested with plausible values of the variables, and the numerical results confirm the predictions of the theory.
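The second-best logic of the second chapter (a small trade tax can raise welfare when a domestic distortion is already present) can be illustrated with a deliberately stylized welfare function, not the Karayalcin model itself: welfare loses −(d − t)² from the uncorrected part of a domestic distortion d and −ct² from the tax's own deadweight cost. At t = 0 the marginal gain is first-order, so a small tax helps even though it distorts.

```python
# Stylized second-best welfare: a trade tax t partially offsets a
# pre-existing domestic distortion d, at deadweight cost c*t**2.
# All parameters are illustrative assumptions.

def welfare(t, d=0.10, c=0.5):
    """Welfare relative to the undistorted optimum (both terms are losses)."""
    return -(d - t) ** 2 - c * t ** 2

w0 = welfare(0.0)          # laissez-faire with the distortion in place
w_small = welfare(0.02)    # a small tax already improves welfare

t_grid = [i / 1000 for i in range(-100, 101)]
t_best = max(t_grid, key=welfare)   # analytic optimum is d / (1 + c)
```

The optimal tax is interior (d/(1+c) here), smaller than the distortion it offsets, precisely because the instrument carries its own cost.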

Relevance:

10.00%

Publisher:

Abstract:

Over the past few decades, we have enjoyed tremendous benefits from the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly complicated processor architectures. However, the exponentially increased transistor density has directly led to exponentially increased power consumption and dramatically elevated system temperatures, which not only adversely impact a system's cost, performance, and reliability, but also increase leakage and thus overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow down the continued evolution of computer technology. Effective power/thermal-aware design techniques are urgently needed at all design abstraction levels, from the circuit level and the logic level to the architectural and system levels.

In this dissertation, we present our research efforts to employ real-time scheduling techniques to solve resource-constrained, power/thermal-aware design-optimization problems. We developed a set of simple yet accurate system-level models to capture a processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature, and supply voltage. Based on these models, we investigated the fundamental principles of power/thermal-aware scheduling and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction, and performance maximization.

The novelty of this work is that we integrate cutting-edge circuit- and architectural-level research on power and temperature into a set of accurate yet simplified system-level models, and are able to conduct system-level analysis and design based on them. The theoretical study in this work serves as a solid foundation to guide the development of power/thermal-aware scheduling algorithms in practical computing systems.
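The kind of system-level model the abstract describes is often a lumped RC thermal model with temperature-dependent leakage. The sketch below is a generic example of that model class with made-up parameters, not the dissertation's calibrated models: C·dT/dt = P_dyn + P_leak(T) − (T − T_amb)/R, integrated by forward Euler.

```python
# Lumped RC thermal model with (linearized) temperature-dependent leakage.
# All coefficients are illustrative assumptions.

def leakage(T, k0=1.0, k1=0.02):
    """Leakage power, growing linearly with temperature T (degrees C)."""
    return k0 + k1 * T

def simulate(P_dyn, T0=25.0, T_amb=25.0, R=2.0, C=10.0, dt=0.1, steps=5000):
    """Forward-Euler integration of C*dT/dt = P_dyn + P_leak(T) - (T - T_amb)/R.

    Returns the (near steady-state) final temperature."""
    T = T0
    for _ in range(steps):
        T += (P_dyn + leakage(T) - (T - T_amb) / R) * dt / C
    return T

T_low = simulate(P_dyn=5.0)    # converges to (P_dyn + k0 + T_amb/R) / (1/R - k1)
T_high = simulate(P_dyn=10.0)
```

The leakage term feeds back positively on temperature, which is exactly the leakage/temperature interdependency the models above are built to capture; with k1 too large relative to 1/R the loop becomes unstable (thermal runaway).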

Relevance:

10.00%

Publisher:

Abstract:

Fueled by the increasing human appetite for high computing performance, semiconductor technology has now marched into the deep sub-micron era. As transistor size keeps shrinking, more and more transistors are integrated into a single chip, which has tremendously increased the power consumption and heat generation of IC chips. The rapidly growing heat dissipation greatly increases packaging and cooling costs and adversely affects the performance and reliability of a computing system; it also reduces the processor's life span and may even crash the entire system. Dynamic thermal management (DTM) is therefore becoming a critical problem in modern computer system design. Extensive theoretical research has been conducted to study the DTM problem; however, most of it is based on theoretically idealized assumptions or simplified models. While these models and assumptions help to greatly simplify a complex problem and make it theoretically manageable, practical computer systems and applications must deal with many practical factors and details beyond them. The goal of our research was to develop a test platform that can be used to validate theoretical results on DTM under well-controlled conditions, to identify the limitations of existing theoretical results, and to develop new and practical DTM techniques. This dissertation details the background and our research efforts in this endeavor. Specifically, we first developed a customized test platform based on an Intel desktop. We then tested a number of related theoretical works and examined their limitations in a practical hardware environment. With these limitations in mind, we developed a new reactive thermal management algorithm for single-core computing systems to optimize throughput under a peak temperature constraint. We further extended our research to a multicore platform and developed an effective proactive DTM technique for throughput maximization on multicore processors, based on task migration and dynamic voltage and frequency scaling (DVFS). The significance of our research lies in the fact that it complements the extensive theoretical work on increasingly critical thermal problems, enabling the continuous evolution of high-performance computing systems.
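A reactive DTM policy of the kind mentioned above can be sketched as a simple threshold controller: run at full frequency until the temperature exceeds the peak constraint, throttle, and restore full speed once the chip has cooled. This is a generic illustration with assumed thresholds and a crude thermal stand-in, not the dissertation's algorithm.

```python
# Toy reactive DTM: two DVFS levels, throttle above T_max, restore below
# T_max - hysteresis. The thermal "model" is a first-order approach to the
# equilibrium temperature of the current frequency (illustrative values).

def reactive_dtm(temps_per_freq, T_max=80.0, steps=100):
    """Returns (work_done, peak_temperature) over the simulated run."""
    freq, T = 1.0, 40.0
    peak, work = T, 0.0
    for _ in range(steps):
        target = temps_per_freq[freq]
        T += 0.2 * (target - T)        # first-order approach to equilibrium
        peak = max(peak, T)
        work += freq                   # work done scales with frequency
        if T > T_max:
            freq = 0.5                 # throttle
        elif T < T_max - 10.0:
            freq = 1.0                 # restore full speed
    return work, peak

work, peak = reactive_dtm({1.0: 95.0, 0.5: 60.0})
```

Because the controller only reacts after the constraint is crossed, the peak overshoots T_max by a bounded amount; that overshoot is one motivation for the proactive technique the abstract describes.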

Relevance:

10.00%

Publisher:

Abstract:

In this paper, a heterogeneous network composed of femtocells deployed within a macrocell network is considered, and a quality-of-service (QoS)-oriented fairness metric which captures important characteristics of tiered network architectures is proposed. Using homogeneous Poisson processes, the sum capacities in such networks are expressed in closed form for co-channel, dedicated channel, and hybrid resource allocation methods. Then a resource splitting strategy that simultaneously considers capacity maximization, fairness constraints, and QoS constraints is proposed. Detailed computer simulations utilizing 3GPP simulation assumptions show that a hybrid allocation strategy with a well-designed resource split ratio enjoys the best cell-edge user performance, with minimal degradation in the sum throughput of macrocell users when compared with that of co-channel operation.
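The resource-splitting trade-off in the paper above can be illustrated with a toy dedicated-channel split: a fraction ρ of the band goes to femtocells and the rest to the macrocell, capacities follow Shannon's log2(1 + SINR), and a macro-rate floor plays the role of the fairness/QoS constraint. The SINR values are assumptions; the paper's actual closed forms come from stochastic geometry.

```python
import math

# Toy spectrum split between a macrocell and femtocells.
# sinr values are illustrative assumptions, not the paper's expressions.

def capacities(rho, sinr_macro=4.0, sinr_femto=20.0):
    """(macro, femto) capacity in bit/s/Hz for split ratio rho."""
    c_macro = (1 - rho) * math.log2(1 + sinr_macro)
    c_femto = rho * math.log2(1 + sinr_femto)
    return c_macro, c_femto

def best_split(min_macro=1.0):
    """Maximize sum capacity subject to a macro-rate (fairness) floor."""
    best = None
    for i in range(101):
        rho = i / 100
        cm, cf = capacities(rho)
        if cm >= min_macro:
            total = cm + cf
            if best is None or total > best[1]:
                best = (rho, total)
    return best

rho_star, total = best_split()
```

Because the femtocell tier converts spectrum into rate more efficiently here, the sum-capacity objective pushes ρ up until the macro-rate floor binds, which is the qualitative shape of the hybrid-allocation trade-off.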

Relevance:

10.00%

Publisher:

Abstract:

Infrastructure management agencies face multiple challenges, including aging infrastructure, reduced capacity of existing infrastructure, and limited available funds. Decision makers are therefore required to think innovatively and develop inventive ways of using those funds. Maintenance investment decisions are generally made on the basis of physical condition only, yet spending money on public infrastructure is synonymous with spending money on people themselves. This calls for decision parameters beyond physical condition, such as strategic importance, socioeconomic contribution, and infrastructure utilization. Considering multiple decision parameters for infrastructure maintenance investments can be beneficial when funding is limited. Given this motivation, this dissertation presents a prototype decision support framework to evaluate trade-offs among competing infrastructures that are candidates for maintenance, repair, and rehabilitation investments. The performance of each decision parameter, measured through various factors, is combined to determine the integrated state of an infrastructure using Multi-Attribute Utility Theory (MAUT). The integrated state, together with cost and benefit estimates of probable maintenance actions and expert opinion, is used to develop transition probability and reward matrices for each probable maintenance action for a particular candidate infrastructure. These matrices are then used as input to a Markov Decision Process (MDP) in a finite-stage dynamic programming model that performs project (candidate)-level analysis to determine optimized maintenance strategies based on reward maximization. The outcomes of the project-level analysis are then used in a network-level analysis that takes a portfolio management approach to determine a suitable portfolio under budgetary constraints. The major decision support outcomes of the prototype framework include performance trend curves, decision logic maps, and a network-level maintenance investment plan for the upcoming years. The framework has been implemented on a set of bridges treated as a network, with the assistance of the Pima County DOT, AZ. The concept behind this prototype framework is expected to help infrastructure management agencies better manage their available maintenance funds.
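The finite-stage MDP step above can be sketched with backward induction on a toy two-state asset. The transition and reward matrices below are invented for illustration; in the framework they come from the integrated state, cost/benefit estimates, and expert opinion.

```python
# Finite-stage MDP for maintenance planning, solved by backward induction.
# States: 0 = good condition, 1 = poor condition. Matrices are assumptions.

ACTIONS = {
    "do_nothing": {
        "P": [[0.7, 0.3], [0.0, 1.0]],   # transition probabilities
        "R": [10.0, 2.0],                # reward earned in each state
    },
    "repair": {
        "P": [[0.95, 0.05], [0.8, 0.2]],
        "R": [4.0, -4.0],                # state reward minus repair cost
    },
}

def q_value(spec, s, V):
    """Immediate reward plus expected value of the successor state."""
    return spec["R"][s] + sum(p * val for p, val in zip(spec["P"][s], V))

def backward_induction(horizon, n_states=2):
    """Reward-maximizing action per state, per remaining-horizon stage."""
    V = [0.0] * n_states
    policy = []
    for _ in range(horizon):
        stage = [max(ACTIONS, key=lambda a: q_value(ACTIONS[a], s, V))
                 for s in range(n_states)]
        V = [q_value(ACTIONS[stage[s]], s, V) for s in range(n_states)]
        policy.append(stage)
    return V, policy

V, policy = backward_induction(horizon=10)
```

With these numbers, repairing is never worth it at the final stage (policy[0]) but becomes optimal in the poor state once enough horizon remains (policy[-1]), which is the characteristic finite-horizon behavior the framework exploits.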


Relevance:

10.00%

Publisher:

Abstract:

The spread of wireless networks and the growing proliferation of mobile devices require the development of mobility control mechanisms to support the different demands of traffic under different network conditions. A major obstacle to developing this kind of technology is the complexity involved in handling all the information about the large number of Moving Objects (MOs), as well as the signaling overhead required to manage these procedures in the network. Although several initiatives have been proposed by the scientific community to address this issue, they have not proved effective, since they depend on a particular request from the MO that is responsible for triggering the mobility process. Moreover, they are often guided only by wireless-medium statistics, such as the Received Signal Strength Indicator (RSSI) of the candidate Point of Attachment (PoA). This work therefore seeks to develop, evaluate, and validate a sophisticated communication infrastructure for Wireless Networking for Moving Objects (WiNeMO) systems by making use of the flexibility provided by the Software-Defined Networking (SDN) paradigm, where network functions are easily and efficiently deployed by integrating the OpenFlow and IEEE 802.21 standards. For benchmarking purposes, the analysis covered both control and data plane aspects, demonstrating that the proposal significantly outperforms typical IP-based SDN and QoS-enabled capabilities by allowing the network to handle multimedia traffic with optimal Quality of Service (QoS) transport and acceptable Quality of Experience (QoE) over time.

Relevance:

10.00%

Publisher:

Abstract:

Supply chain operations directly affect service levels. Decisions on amending facilities are generally based on overall cost, leaving out the efficiency of each unit. By decomposing the supply chain superstructure, efficiency analysis of the facilities (warehouses or distribution centers) that serve customers can be easily implemented. With the proposed algorithm, the selection of a facility is based on service-level maximization and not just cost minimization, as the analysis filters all feasible solutions using the Data Envelopment Analysis (DEA) technique. Through multiple iterations, solutions are filtered via DEA and only the efficient ones are selected, leading to cost minimization. In this work, the problem of optimal supply chain network design is addressed with a DEA-based algorithm: a Branch and Efficiency (B&E) algorithm is deployed for its solution. In this DEA approach, each solution (a potentially installed warehouse, plant, etc.) is treated as a Decision Making Unit and is thus characterized by inputs and outputs. Through additional constraints named "efficiency cuts", the algorithm selects only efficient solutions, providing better objective function values. The applicability of the proposed algorithm is demonstrated through illustrative examples.
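The DEA filtering idea above can be shown in miniature. The B&E algorithm uses full multi-input, multi-output DEA with efficiency cuts inside a branch-and-bound; the sketch below reduces this to the single-input, single-output special case, where efficiency is just the output/input ratio scaled so the best unit scores 1, and only frontier units are kept.

```python
# Simplified single-input, single-output DEA-style filter.
# The facility data are invented for illustration.

def efficiencies(units):
    """units: {name: (input, output)}. CCR-style score, best unit = 1.0."""
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

def efficient_set(units, tol=1e-9):
    """Keep only the facilities on the efficiency frontier."""
    eff = efficiencies(units)
    return {name for name, e in eff.items() if e >= 1.0 - tol}

# Hypothetical warehouses: (operating cost, service output)
warehouses = {"W1": (100, 80), "W2": (120, 78), "W3": (90, 72)}
keep = efficient_set(warehouses)
```

In the full method, the inefficient units (here W2) are the ones cut from the candidate set before the cost-minimizing design step, so only efficient configurations survive to the final network.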

Relevance:

10.00%

Publisher:

Abstract:

People go through life making all kinds of decisions, some of which affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction, because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices.

The thesis consists of seven articles containing a number of models and methods for estimating, applying, and testing large-scale discrete choice models. The contributions fall under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms.

Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing but also speeds up the estimation of simple logit models, with implications for traffic simulation as well. Moreover, we compare the utility-maximization and regret-minimization decision rules, and we propose a misspecification test for logit-based route choice models.

The second theme concerns the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm.

Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into the usual optimization algorithms (line search and trust region) to accelerate the estimation process.

The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
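The dynamic-programming core of dynamic discrete route choice can be sketched on a tiny network. Under logit assumptions, the value of a node is the log-sum of the downstream link utilities, V(i) = log Σⱼ exp(v(i,j) + V(j)) with V(destination) = 0, and link choice probabilities follow from the same terms. The network and utilities below are invented for illustration, not taken from the thesis.

```python
import math

# Toy network: deterministic link utilities v[i][j] for link i -> j.
# "D" is the destination; all numbers are illustrative assumptions.
v = {
    "A": {"B": -1.0, "C": -1.5},
    "B": {"D": -1.0},
    "C": {"D": -0.5},
    "D": {},
}

def node_values(v, dest="D", iters=50):
    """Fixed-point iteration on V(i) = log sum_j exp(v(i,j) + V(j))."""
    V = {i: 0.0 for i in v}
    for _ in range(iters):
        for i in v:
            if i == dest:
                continue
            V[i] = math.log(sum(math.exp(u + V[j]) for j, u in v[i].items()))
    return V

def choice_probs(i, V):
    """Logit probability of each outgoing link at node i."""
    z = {j: math.exp(u + V[j]) for j, u in v[i].items()}
    tot = sum(z.values())
    return {j: x / tot for j, x in z.items()}

V = node_values(v)
p = choice_probs("A", V)
```

Both paths A→B→D and A→C→D have total utility −2 here, so the model assigns them equal probability; solving for V is exactly the dynamic program whose cost dominates estimation, which is what the decomposition and reformulation methods above attack.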

Relevance:

10.00%

Publisher:

Abstract:

In our daily lives, we often must predict how well we are going to perform in the future based on an evaluation of our current performance and an assessment of how much we will improve with practice. Such predictions can be used to decide whether to invest our time and energy in learning and, if we opt to invest, what rewards we may gain. This thesis investigated whether people are capable of tracking their own learning (i.e. current and future motor ability) and exploiting that information to make decisions related to task reward. In experiment one, participants performed a target aiming task under a visuomotor rotation such that they initially missed the target but gradually improved. After briefly practicing the task, they were asked to select rewards for hits and misses applied to subsequent performance in the task, where selecting a higher reward for hits came at a cost of receiving a lower reward for misses. We found that participants made decisions that were in the direction of optimal and therefore demonstrated knowledge of future task performance. In experiment two, participants learned a novel target aiming task in which they were rewarded for target hits. Every five trials, they could choose a target size which varied inversely with reward value. Although participants’ decisions deviated from optimal, a model suggested that they took into account both past performance, and predicted future performance, when making their decisions. Together, these experiments suggest that people are capable of tracking their own learning and using that information to make sensible decisions related to reward maximization.
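The reward-selection problem in the second experiment can be framed as expected-reward maximization: given a model of one's own aiming noise, pick the target size whose reward, weighted by the probability of hitting it, is largest. The Gaussian error model and the size/reward menu below are illustrative assumptions, not the experiment's actual parameters.

```python
import math

# Choosing a target size to maximize expected reward under aiming noise.
# 1-D Gaussian error with std sigma; values are illustrative.

def hit_prob(target_radius, sigma=1.0):
    """P(|error| <= radius) for zero-mean Gaussian error."""
    return math.erf(target_radius / (sigma * math.sqrt(2)))

def expected_reward(target_radius, reward):
    return hit_prob(target_radius) * reward

# Smaller targets pay more (the inverse relation used in the experiment).
options = {0.5: 10.0, 1.0: 6.0, 2.0: 3.0}
best = max(options, key=lambda r: expected_reward(r, options[r]))
```

With this noise level the middle option wins: the small target pays most per hit but is missed too often, and the large one is hit reliably but pays too little. As sigma shrinks with learning, the optimum shifts toward smaller, higher-paying targets, which is the tracking-of-learning effect the thesis tests.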

Relevance:

10.00%

Publisher:

Abstract:

Demand response (DR) algorithms manipulate the energy consumption schedules of controllable loads so as to satisfy grid objectives. Implementation of DR algorithms using a centralized agent can be problematic for scalability reasons, and there are issues related to the privacy of data and robustness to communication failures. Thus, it is desirable to use a scalable decentralized algorithm for the implementation of DR. In this paper, a hierarchical DR scheme is proposed for peak minimization based on Dantzig-Wolfe decomposition (DWD). In addition, a time weighted maximization option is included in the cost function, which improves the quality of service for devices seeking to receive their desired energy sooner rather than later. This paper also demonstrates how the DWD algorithm can be implemented more efficiently through the calculation of the upper and lower cost bounds after each DWD iteration.
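The peak-minimization objective above can be illustrated with a deliberately simple greedy heuristic rather than the paper's Dantzig-Wolfe decomposition: each controllable load is placed in the currently least-loaded time slot. The demand profile and load sizes are invented for illustration.

```python
# Greedy peak minimization by load shifting (an illustrative heuristic,
# not the paper's DWD-based hierarchical scheme).

def schedule(base, loads):
    """base: fixed demand per time slot; loads: shiftable load sizes.

    Places the largest loads first, each into the least-loaded slot."""
    profile = list(base)
    for load in sorted(loads, reverse=True):
        slot = min(range(len(profile)), key=lambda t: profile[t])
        profile[slot] += load
    return profile

base = [3.0, 1.0, 2.0, 1.0]
profile = schedule(base, [1.0, 1.0, 1.0, 1.0])
peak = max(profile)
```

On this instance the greedy result matches the lower bound set by the largest fixed demand, but in general such a heuristic can be suboptimal; the DWD formulation solves the underlying optimization in a decentralized way while also yielding the upper and lower cost bounds mentioned above.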

Relevance:

10.00%

Publisher:

Abstract:

We consider linear precoder design for an underlay cognitive radio multiple-input multiple-output broadcast channel, where the secondary system, consisting of a secondary base station (BS) and a group of secondary users (SUs), is allowed to share the same spectrum with the primary system. All transceivers are equipped with multiple antennas, and each antenna of the secondary BS has its own maximum power constraint. Assuming the zero-forcing method is used to eliminate multiuser interference, we study the sum-rate maximization problem for the secondary system, subject to both per-antenna power constraints at the secondary BS and interference power constraints at the primary users. The problem of interest differs from those studied previously, which often assumed a sum power constraint and/or a single antenna at the primary receivers, or at both the primary and secondary receivers. To develop an efficient numerical algorithm, we first invoke the rank relaxation method to transform the considered problem into a convex-concave problem based on a downlink-uplink result. We then propose a barrier interior-point method to solve the resulting saddle-point problem. In particular, in each iteration of the proposed method we find the Newton step by solving a system of discrete-time Sylvester equations, which helps reduce the complexity significantly compared with the conventional method. Simulation results demonstrate the fast convergence and effectiveness of the proposed algorithm.
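The zero-forcing assumption in the paper above means the precoder is chosen so that each user sees no interference from the streams intended for the others. For a square, invertible channel this reduces to inverting the channel matrix; the 2x2 sketch below shows only that interference-cancellation step, omitting the paper's per-antenna power and primary-user interference constraints.

```python
# Zero-forcing precoding for a toy 2-user, 2-antenna channel.
# H is an invertible channel matrix (illustrative values); with W = H^{-1},
# the effective channel H @ W is the identity, so multiuser interference
# (the off-diagonal terms) vanishes.

def zf_precoder(H):
    """Closed-form inverse of a 2x2 channel matrix."""
    (a, b), (c, d) = H
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

H = [[1.0, 0.3], [0.2, 0.9]]
W = zf_precoder(H)
E = matmul(H, W)   # effective channel: should be the identity matrix
```

In the paper's setting the remaining degrees of freedom after zero-forcing are what the sum-rate maximization allocates, subject to the per-antenna and interference power constraints.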