951 results for Efficient capital allocation
Abstract:
We address the problem of allocating a single divisible good among a number of agents. The agents have concave valuation functions parameterized by a scalar type, and they report only that type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as possible to the agents as rebates. Two performance criteria are of interest within the class of linear rebate functions: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus; the goal is to minimize each. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems in which the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems obtained by sampling constraints; the relaxed problem is a linear program (LP). We identify the number of samples needed for ``near-feasibility'' of the relaxed constraint set and, under some conditions on the valuation function, show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper fall back to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations in which the valuation functions are not known to the central planner are also discussed.
Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering information privately held by strategic users, where the utilities are any concave function of the allocations and where the resource planner is interested not in maximizing revenue but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism and then returning as much of the collected money as possible as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of ``near-feasibility'' with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system. We demonstrate via simulation, however, that if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
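A back-of-the-envelope way to size the constraint sample is a scenario-optimization bound. The sketch below is illustrative only: the specific constant and the function name `scenario_sample_size` are our assumptions in the style of Calafiore-Campi bounds, not the paper's exact sample-complexity result.

```python
import math

def scenario_sample_size(d, eps, delta):
    """One commonly quoted sufficient sample size from scenario
    optimization: solving the LP over m sampled half-plane constraints
    leaves the remaining continuum violated with probability at most eps,
    with confidence 1 - delta, when the decision variable (here, the
    linear rebate coefficients) has dimension d."""
    return math.ceil((2.0 / eps) * (math.log(1.0 / delta) + d))

# Example: 5 rebate coefficients, 5% near-feasibility level, 99% confidence.
m = scenario_sample_size(d=5, eps=0.05, delta=0.01)
```

The bound grows only logarithmically in the confidence parameter, which is why modest sample counts already give high-confidence near-feasibility.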
Abstract:
There are many popular models available for classification of documents, such as the Naïve Bayes classifier, k-Nearest Neighbors, and the Support Vector Machine. In all these cases, the representation is based on the "Bag of Words" model. This model does not capture the actual semantic meaning of a word in a particular document. Semantics are better captured by the proximity of words and their co-occurrence in the document. We propose a new "Bag of Phrases" model to capture this discriminative power of phrases for text classification. We present a novel algorithm to extract phrases from the corpus using the well-known topic model, Latent Dirichlet Allocation (LDA), and to integrate them into the vector space model for classification. Experiments show better performance of classifiers with the new Bag of Phrases model than with related representation models.
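As a toy illustration of the representation only (the paper's LDA-guided phrase filtering is omitted here), phrases can be harvested as frequent bigrams and used as vector-space features:

```python
from collections import Counter

def extract_phrases(docs, min_count=2):
    """Collect bigrams occurring at least min_count times in the corpus.
    The paper additionally filters candidates using LDA topic
    information; that step is not modeled in this sketch."""
    counts = Counter()
    for doc in docs:
        toks = doc.lower().split()
        counts.update(zip(toks, toks[1:]))
    return {bg for bg, c in counts.items() if c >= min_count}

def bag_of_phrases(doc, phrases):
    """Sparse vector-space representation over the phrase vocabulary."""
    toks = doc.lower().split()
    bigrams = Counter(zip(toks, toks[1:]))
    return {" ".join(p): bigrams[p] for p in phrases if bigrams[p] > 0}

docs = ["machine learning improves text classification",
        "text classification with machine learning models"]
phrases = extract_phrases(docs)
```

Here "machine learning" and "text classification" repeat across the corpus, so both survive the frequency filter and become features alongside (or instead of) single words.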
Abstract:
Single receive antenna selection (AS) is a popular method for obtaining diversity benefits without the additional costs of multiple radio receiver chains. Since only one antenna receives at any time, the transmitter sends a pilot multiple times to enable the receiver to estimate the channel gains of its N antennas to the transmitter and select an antenna. In time-varying channels, the channel estimates of different antennas are outdated to different extents. We analyze the symbol error probability (SEP) in time-varying channels of the N-pilot and (N+1)-pilot AS training schemes. In the former, the transmitter sends one pilot for each receive antenna. In the latter, the transmitter sends one additional pilot that helps sample the channel fading process of the selected antenna twice. We present several new results about the SEP, optimal energy allocation across pilots and data, and optimal selection rule in time-varying channels for the two schemes. We show that due to the unique nature of AS, the (N+1)-pilot scheme, despite its longer training duration, is much more energy-efficient than the conventional N-pilot scheme. An extension to a practical scenario where all data symbols of a packet are received by the same antenna is also investigated.
Abstract:
Femtocells are a new concept that improves the coverage and capacity of a cellular system. We consider the problem of channel allocation and power control for different users within a femtocell. Knowing the available channels, the channel states, and the rate requirements of different users, the femtocell base station (FBS) allocates the channels to different users so as to satisfy their requirements. The femtocell should also use minimal power so as to cause the least interference to its neighboring femtocells and outside users. We develop efficient, low-complexity algorithms that can be used online by the femtocell. The users may want to transmit data or voice. We compare our algorithms with the optimal solutions.
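A minimal sketch of one plausible low-complexity heuristic (an assumption for illustration, not the paper's algorithm): each user, in decreasing order of rate requirement, takes the free channel on which it has the best gain and transmits with just enough power to meet its rate, under the model rate = log2(1 + gain * power).

```python
def min_power(rate, gain):
    # Invert rate = log2(1 + gain * power) to get the minimum power.
    return (2 ** rate - 1) / gain

def greedy_allocate(req, gains, channels):
    """req: user -> required rate (bits/s/Hz);
    gains: (user, channel) -> channel gain.
    Users with larger requirements choose first; each takes the free
    channel with its best gain and uses just enough power."""
    alloc, free = {}, set(channels)
    for u in sorted(req, key=req.get, reverse=True):
        ch = max(free, key=lambda c: gains[(u, c)])
        free.remove(ch)
        alloc[u] = (ch, min_power(req[u], gains[(u, ch)]))
    return alloc

req = {"a": 2.0, "b": 1.0}
gains = {("a", 1): 0.5, ("a", 2): 2.0, ("b", 1): 1.0, ("b", 2): 1.5}
alloc = greedy_allocate(req, gains, [1, 2])
```

User "a" claims channel 2 (its best gain), leaving channel 1 for "b"; both meet their rates with the least power the model allows on their assigned channels.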
Abstract:
We address the problem of passive eavesdroppers in multi-hop wireless networks using the technique of friendly jamming. The network is assumed to employ Decode and Forward (DF) relaying. Assuming the availability of perfect channel state information (CSI) of legitimate nodes and eavesdroppers, we consider a scheduling and power allocation (PA) problem for a multiple-source multiple-sink scenario so that eavesdroppers are jammed, and source-destination throughput targets are met while minimizing the overall transmitted power. We propose activation sets (AS-es) for scheduling, and formulate an optimization problem for PA. Several methods for finding AS-es are discussed and compared. We present an approximate linear program for the original nonlinear, non-convex PA optimization problem, and argue that under certain conditions, both the formulations produce identical results. In the absence of eavesdroppers' CSI, we utilize the notion of Vulnerability Region (VR), and formulate an optimization problem with the objective of minimizing the VR. Our results show that the proposed solution can achieve power-efficient operation while defeating eavesdroppers and achieving desired source-destination throughputs simultaneously. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
We consider optimal average power allocation policies for a wireless channel in the presence of individual delay constraints on the transmitted packets. Power is consumed only in the transmission of data, and we consider the case in which the power used in transmission is a linear function of the data transmitted. The transmission channel may experience multipath fading. We develop a computationally efficient online algorithm for the case in which all packets share the same hard delay constraint, and then generalize it to multiple real-time streams with different hard deadline constraints. Our algorithm uses linear programming and has very low complexity.
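Under a linear power-data relation, the energy cost per bit in a slot is simply the reciprocal of the channel gain. An offline simplification of the scheduling problem (one packet per slot, earliest deadline first; a hypothetical sketch, not the paper's online LP algorithm) then looks like:

```python
def offline_schedule(packets, gains):
    """packets: list of (bits, deadline_slot); gains[t]: channel gain in
    slot t. With energy = bits / gain (power linear in data), each packet
    is cheapest to send entirely in its single best feasible slot.
    Simplifying assumption: at most one packet per slot. Packets are
    served earliest-deadline-first; each takes the best unused slot at
    or before its deadline. Returns total transmit energy."""
    used, total = set(), 0.0
    for bits, dl in sorted(packets, key=lambda p: p[1]):
        slots = [t for t in range(dl + 1) if t not in used]
        best = max(slots, key=lambda t: gains[t])
        used.add(best)
        total += bits / gains[best]
    return total

# Two packets: 4 bits due by slot 1, 2 bits due by slot 2; gains improve
# over time, so each packet waits for its best feasible slot.
total = offline_schedule([(4, 1), (2, 2)], [1.0, 2.0, 4.0])
```

The first packet cannot wait for the best slot (its deadline intervenes), which is exactly the tension the paper's delay-constrained policies resolve online without knowing future gains.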
Abstract:
This dissertation contains three essays on mechanism design. The common goal of these essays is to assist in the solution of different resource allocation problems where asymmetric information creates obstacles to the efficient allocation of resources. In each essay, we present a mechanism that satisfactorily solves the resource allocation problem and study some of its properties. In our first essay, "Combinatorial Assignment under Dichotomous Preferences", we present a class of problems akin to time scheduling without a pre-existing time grid, and propose a mechanism that is efficient, strategy-proof and envy-free. Our second essay, "Monitoring Costs and the Management of Common-Pool Resources", studies what can happen to an existing mechanism, the individual tradable quotas (ITQ) mechanism, also known as the cap-and-trade mechanism, when quota enforcement is imperfect and costly. Our third essay, "Vessel Buyback", coauthored with John O. Ledyard, presents an auction design that can be used to buy back excess capital in overcapitalized industries.
Abstract:
This research aimed to open a discussion on the role of contemporary sport in the process of human alienation in times of monopoly capitalism and the strengthening of the dominant ideology. To that end, the first chapter analysed the main transformations historically experienced by capitalism, with the intention of identifying the impact of monopoly capitalism on the new ordering of humanity. The second chapter showed how contemporary sport was constituted as a bourgeois institution, socially determined and integrated into the set of norms, ideas and strategies inherent to the capitalist mode of production, taking part in the process of masking the social question. This chapter highlights the use of documentary sources showing that contemporary sport has occupied a strategic place both in the production of the dominant ideology and in containing the fall in the rate of profit. It was found that, under such conditions, contemporary sport forms part of the compensatory processes set against the tendential fall in the rate of profit and, at the same time, is integrated into the process of human alienation, whose greatest expression is its materialization in the form of sporting mega-events. At this point, the research concentrates on the analysis of sporting mega-events in Brazil and on the creation of sport policies, from the first Lula da Silva government to the present day. It was found that sport development projects in the country, in the period under review, have taken part in the process of managing the crisis of capital and the ebb of workers' struggles. The final chapter addressed the particularities of post-modern ideology, with the aim of identifying its relations with the sporting phenomenon.
It was found that, in times of monopoly capitalism and its neoliberal policies, the contradictions that sharpen the process of alienation under which the working class of the whole world finds itself place humanity on a new level of alienation, even more brutal and dehumanizing. In this conjuncture, contemporary sport stands out for being functional both to the globalized market and to the imperialist project, imposing itself as an instrument for the containment of conflicts in the name of tolerance and world peace. This research concludes that the conditions imposed by the monopoly phase of capitalism conceal the dialectical nature of sport and turn it into an efficient instrument of the dominant project of deepening human alienation. Sport, in the form it has assumed in contemporary times, does not contribute to the advance of working-class consciousness, for it has been collaborating in the postponement of the project of human emancipation, a project that will only be produced by the conscious organization of the working class in pursuit of overcoming the capitalist mode of production.
Abstract:
Power allocation is studied for fixed-rate transmission over block-fading channels with arbitrary continuous fading distributions and perfect transmitter and receiver channel state information. Both short- and long-term power constraints for arbitrary input distributions are considered. Optimal power allocation schemes are shown to be direct applications of previous results in the literature. It is shown that the short- and long-term outage exponents for arbitrary input distributions are related through a simple formula. The formula is useful to predict when the delay-limited capacity is positive. Furthermore, this characterization is useful for the design of efficient coding schemes for this relevant channel model. © 2010 IEEE.
Abstract:
Conventional parallel computer architectures do not provide support for non-uniformly distributed objects. In this thesis, I introduce sparsely faceted arrays (SFAs), a new low-level mechanism for naming regions of memory, or facets, on different processors in a distributed, shared memory parallel processing system. Sparsely faceted arrays address the disconnect between the global distributed arrays provided by conventional architectures (e.g. the Cray T3 series), and the requirements of high-level parallel programming methods that wish to use objects that are distributed over only a subset of processing elements. A sparsely faceted array names a virtual globally-distributed array, but actual facets are lazily allocated. By providing simple semantics and making efficient use of memory, SFAs enable efficient implementation of a variety of non-uniformly distributed data structures and related algorithms. I present example applications which use SFAs, and describe and evaluate simple hardware mechanisms for implementing SFAs. Keeping track of which nodes have allocated facets for a particular SFA is an important task that suggests the need for automatic memory management, including garbage collection. To address this need, I first argue that conventional tracing techniques such as mark/sweep and copying GC are inherently unscalable in parallel systems. I then present a parallel memory-management strategy, based on reference-counting, that is capable of garbage collecting sparsely faceted arrays. I also discuss opportunities for hardware support of this garbage collection strategy. I have implemented a high-level hardware/OS simulator featuring hardware support for sparsely faceted arrays and automatic garbage collection. I describe the simulator and outline a few of the numerous details associated with a "real" implementation of SFAs and SFA-aware garbage collection. Simulation results are used throughout this thesis in the evaluation of hardware support mechanisms.
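At the data-structure level, the lazy-facet idea can be mimicked in a few lines (a software sketch only; the thesis proposes hardware/OS mechanisms, and the class below is our illustrative construction):

```python
class SFA:
    """Toy model of a sparsely faceted array: one global name maps to
    per-node facets that are allocated lazily, on first touch, rather
    than eagerly on every node. Nodes that never touch the SFA consume
    no memory for it, which is the point of the mechanism."""
    def __init__(self, facet_size):
        self.facet_size = facet_size
        self.facets = {}                  # node id -> local storage

    def facet(self, node):
        if node not in self.facets:       # lazy allocation on first touch
            self.facets[node] = [0] * self.facet_size
        return self.facets[node]

a = SFA(facet_size=4)
a.facet(7)[0] = 42    # only node 7 has backing storage so far
```

Tracking which nodes hold facets (here, the keys of `facets`) is exactly the bookkeeping that motivates the thesis's reference-counting garbage collection for SFAs.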
Abstract:
High-speed networks, such as ATM networks, are expected to support diverse Quality of Service (QoS) constraints, including real-time QoS guarantees. Real-time QoS is required by many applications such as those that involve voice and video communication. To support such services, routing algorithms that allow applications to reserve the needed bandwidth over a Virtual Circuit (VC) have been proposed. Commonly, these bandwidth-reservation algorithms assign VCs to routes using the least-loaded concept, and thus result in balancing the load over the set of all candidate routes. In this paper, we show that for such reservation-based protocols, which allow for the exclusive use of a preset fraction of a resource's bandwidth for an extended period of time, load balancing is not desirable as it results in resource fragmentation, which adversely affects the likelihood of accepting new reservations. In particular, we show that load-balancing VC routing algorithms are not appropriate when the main objective of the routing protocol is to increase the probability of finding routes that satisfy incoming VC requests, as opposed to equalizing the bandwidth utilization along the various routes. We present an on-line VC routing scheme that is based on the concept of "load profiling", which allows a distribution of "available" bandwidth across a set of candidate routes to match the characteristics of incoming VC QoS requests. We show the effectiveness of our load-profiling approach when compared to traditional load-balancing and load-packing VC routing schemes.
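The contrast between least-loaded routing and fragmentation-aware routing can be sketched as follows. Load profiling proper matches residual capacities to the observed distribution of request sizes; this sketch (our illustration, not the paper's scheme) shows only the simpler load-packing (best-fit) flavor it is compared against:

```python
def pick_route_balancing(avail, demand):
    """Least-loaded (load-balancing): among routes that can carry the
    demand, pick the one with the MOST available bandwidth."""
    feasible = [r for r, b in avail.items() if b >= demand]
    return max(feasible, key=avail.get)

def pick_route_packing(avail, demand):
    """Load-packing (best-fit): the tightest route that still fits,
    leaving large contiguous capacity free for future large requests."""
    feasible = [r for r, b in avail.items() if b >= demand]
    return min(feasible, key=avail.get)

avail = {"r1": 10, "r2": 4}
# Balancing routes a 3-unit VC onto r1 and fragments it; packing keeps
# r1 whole, so a later 10-unit request can still be admitted.
```

Load profiling generalizes this choice by keeping the *profile* of residual capacities matched to the mix of request sizes actually arriving, rather than always packing tightly.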
Inclusive education policy, the general allocation model and dilemmas of practice in primary schools
Abstract:
Background: Inclusive education is central to contemporary discourse internationally, reflecting societies' wider commitment to social inclusion. Education has witnessed transforming approaches that have created differing distributions of power, resource allocation and accountability. Multiple actors are being forced to consider changes to how key services and supports are organised. This research constitutes a case study situated within this broader social service dilemma of how to distribute finite resources equitably to meet individual need, while advancing inclusion. It focuses on the national directive with regard to inclusive educational practice for primary schools, Department of Education and Science Special Education Circular 02/05, which introduced the General Allocation Model (GAM) within the legislative context of the Education of Persons with Special Educational Needs (EPSEN) Act (Government of Ireland, 2004). This research could help to inform policy with 'facts about what is happening on the ground' (Quinn, 2013). Research Aims: The research set out to unearth the assumptions and definitions embedded within the policy document, to analyse how those who are at the coalface of policy, and who interface with multiple interests in primary schools, understand the GAM and respond to it, and to investigate its effects on students and their education. It examines student outcomes in the primary schools where the GAM was investigated. Methods and Sample: The post-structural study acknowledges the importance of policy analysis which explicitly links the 'bigger worlds' of global and national policy contexts to the 'smaller worlds' of policies and practices within schools and classrooms. This study insists upon taking the detail seriously (Ozga, 1990). A mixed methods approach to data collection and analysis is applied.
In order to secure the perspectives of key stakeholders, semi-structured interviews were conducted with primary school principals, class teachers and learning support/resource teachers (n=14) in three distinct mainstream, non-DEIS schools. Data from the schools and their environs provided a profile of students. The researcher then used the Pobal Maps Facility (available at www.pobal.ie) to identify the Small Area (SA) in which each student resides, and to assign values to each address based on the Pobal HP Deprivation Index (Haase and Pratschke, 2012). Analysis of the datasets, guided by the conceptual framework of the policy cycle (Ball, 1994), revealed a number of significant themes. Results: Data illustrate that the main model to support student need is withdrawal from the classroom, under a policy that espouses inclusion. Quantitative data, in particular, highlighted an association between segregated practice and lower socio-economic status (LSES) backgrounds of students. Up to 83% of the students in special education programmes are from LSES backgrounds. In some schools, 94% of students from LSES backgrounds are withdrawn from classrooms daily for special education. While the internal processes of schooling are not solely to blame for class inequalities, this study reveals the power of professionals to order children in school, which has implications for segregated special education practice. Such agency on the part of key actors in the context of practice relates to 'local constructions of dis/ability', which is influenced by teacher habitus (Bourdieu, 1984). The researcher contends that inclusive education has not resulted in positive outcomes for students from LSES backgrounds because it is built on faulty assumptions that focus on a psycho-medical perspective of dis/ability; that is, placement decisions do not consider the intersectionality of dis/ability with class or culture.
This study argues that the student need for support is better understood as 'home/school discontinuity', not 'disability'. Moreover, the study unearths the power of some parents to use social and cultural capital to ensure eligibility for enhanced resources. A hierarchical system has therefore developed in mainstream schools as a result of funding models to support need in inclusive settings. Furthermore, all schools in the study are 'ordinary' schools, yet participants acknowledged that some schools are more 'advantaged', which may suggest that 'ordinary' schools serve to 'bury class' (Reay, 2010) as a key marker in allocating resources. The research suggests that general allocation models of funding to meet the needs of students demand a systematic approach grounded in reallocating funds from where they have less benefit to where they have more. The calculation of the composite Haase Value in respect of the student cohort in receipt of special education support adopted for this study could be usefully applied at a national level to ensure that the greatest level of support is targeted at greatest need. Conclusion: In summary, the study reveals that existing structures constrain and enable agents, whose interactions produce intended and unintended consequences. The study suggests that policy should be viewed as a continuous and evolving cycle (Ball, 1994) in which actors in each of the social contexts share responsibility for the evolution of education that is equitable, excellent and inclusive.
Abstract:
In a deregulated power system, it is usually required to determine the shares of each load and generation in line flows, to permit fair allocation of transmission costs between the interested parties. The paper presents a new method of determining the contributions of each load to line flows and losses. The method is based on power-flow topology and has the advantage of being the least computationally demanding of similar methods.
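The proportional-sharing idea behind topological tracing methods of this kind can be sketched as follows. Bialek-style tracing is assumed here purely for illustration; the paper's exact procedure may differ, and the function names are ours:

```python
def trace(loads, lines):
    """Proportional-sharing tracing of load contributions to line flows.
    loads: bus -> demand; lines: (from_bus, to_bus) -> flow, with the
    directed flow pattern assumed acyclic. At every bus, each inflow is
    assumed to feed all outflows (loads and outgoing lines) in
    proportion to their magnitudes."""
    buses = set(loads) | {i for i, _ in lines} | {j for _, j in lines}
    # Throughflow at a bus: local load plus everything leaving on lines.
    through = {b: loads.get(b, 0.0) +
                  sum(f for (i, _), f in lines.items() if i == b)
               for b in buses}

    def frac(j, b):
        # Fraction of power entering bus j that ends up serving load b.
        own = loads.get(j, 0.0) / through[j] if j == b else 0.0
        return own + sum(f / through[j] * frac(k, b)
                         for (i, k), f in lines.items() if i == j)

    return {(line, b): flow * frac(line[1], b)
            for line, flow in lines.items() for b in loads}

# 3-bus example: generation at bus 1; loads of 30 at bus 2, 70 at bus 3.
lines = {(1, 2): 60.0, (1, 3): 40.0, (2, 3): 30.0}
loads = {2: 30.0, 3: 70.0}
contrib = trace(loads, lines)
```

Each line's flow splits exactly across the loads (e.g. the 60 units on line 1-2 split 30/30 between the two loads), so transmission costs can be apportioned without ambiguity, and the work per line is modest, consistent with the paper's emphasis on low computational demand.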
Abstract:
Computationally efficient sequential learning algorithms are developed for direct-link resource-allocating networks (DRANs). These are achieved by decomposing existing recursive training algorithms on a layer-by-layer and neuron-by-neuron basis. This allows network weights to be updated in an efficient parallel manner and facilitates the implementation of minimal update extensions that yield a significant reduction in computational load per iteration compared to existing sequential learning methods employed in resource-allocating network (RAN) and minimal RAN (MRAN) approaches. The new algorithms, which also incorporate a pruning strategy to control network growth, are evaluated on three different system identification benchmark problems and shown to outperform existing methods both in terms of training error convergence and computational efficiency. (c) 2005 Elsevier B.V. All rights reserved.
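The grow-or-minimally-update logic of a resource-allocating network can be sketched in scalar form. This is illustrative of generic RAN/MRAN-style sequential learning only, not the paper's DRAN decomposition; all thresholds and names below are our assumptions:

```python
import math

class TinyRAN:
    """Minimal resource-allocating-network sketch: a Gaussian unit is
    added when a sample is both poorly predicted and far from existing
    centres; otherwise only the nearest unit's output weight is nudged
    (the 'minimal update' idea). Scalar input/output for brevity."""
    def __init__(self, err_tol=0.1, dist_tol=0.5, width=0.5, lr=0.1):
        self.c, self.w = [], []           # centres and output weights
        self.err_tol, self.dist_tol = err_tol, dist_tol
        self.width, self.lr = width, lr

    def predict(self, x):
        return sum(w * math.exp(-((x - c) / self.width) ** 2)
                   for c, w in zip(self.c, self.w))

    def observe(self, x, y):
        e = y - self.predict(x)
        d = min((abs(x - c) for c in self.c), default=float("inf"))
        if abs(e) > self.err_tol and d > self.dist_tol:
            self.c.append(x)              # grow: allocate a new unit
            self.w.append(e)
        elif self.c:
            i = min(range(len(self.c)), key=lambda k: abs(x - self.c[k]))
            self.w[i] += self.lr * e      # minimal update: nearest unit only

net = TinyRAN()
for x, y in [(0.0, 1.0), (1.0, 0.5), (0.05, 1.0)]:
    net.observe(x, y)
```

The third sample lands near an existing centre and is already well predicted, so it triggers only the cheap single-neuron update; the per-iteration saving of updating a few weights instead of all of them is what the paper's decomposition exploits at scale.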
Abstract:
We analyze a two-sector growth model with directed technical change where man-made capital and exhaustible resources are essential for production. The relative profitability of factor-specific innovations endogenously determines whether technical progress will be capital- or resource-augmenting. We show that any balanced growth equilibrium features purely resource-augmenting technical change. This result is compatible with alternative specifications of preferences and innovation technologies, as it hinges on the interplay between productive efficiency in the final sector, and the Hotelling rule characterizing the efficient depletion path for the exhaustible resource. Our result provides sound micro-foundations for the broad class of models of exogenous/endogenous growth where resource-augmenting progress is required to sustain consumption in the long run, contradicting the view that these models are conceptually biased in favor of sustainability.