958 results for Cost allocation
Abstract:
In the last decade, mobile wireless communications have witnessed explosive growth in user penetration rates and widespread deployment around the globe. This tendency is expected to continue with the convergence of fixed wired Internet networks with mobile ones and with the evolution to the all-IP architecture paradigm. Mobile wireless communications will therefore be of paramount importance in the development of the information society of the near future. In particular, a research topic of special relevance in telecommunications today is the design and implementation of fourth-generation (4G) mobile communication systems. 4G networks will be characterized by the support of multiple radio access technologies over a core network fully compliant with the Internet Protocol (the all-IP paradigm). Such networks will sustain the stringent quality of service (QoS) requirements and the high data rates expected from the multimedia applications to become available in the near future. The approach followed in the design and implementation of current-generation (2G and 3G) mobile wireless networks has been the stratification of the architecture into a protocol model composed of a set of layers, each encompassing a set of functionalities. In such a layered protocol model, communication is only allowed between adjacent layers and through specific service access points. This modular concept eases the implementation of new functionalities, as the behaviour of each layer in the protocol stack is not affected by the others. However, the fact that lower layers in the protocol stack do not use information available from upper layers, and vice versa, degrades the achievable performance. This is particularly relevant when multiple-antenna systems, in a MIMO (Multiple Input Multiple Output) configuration, are implemented. MIMO schemes introduce another degree of freedom for radio resource allocation: the space domain. Contrary to the time and frequency domains, radio resources mapped into the spatial domain cannot be assumed to be completely orthogonal, due to the interference among users transmitting in the same frequency sub-channel and/or time slot but on different spatial beams. Therefore, the availability of information about the state of radio resources, passed from lower to upper layers, is of fundamental importance in meeting the QoS levels expected by those multimedia applications. To match application requirements with the constraints of the mobile radio channel, researchers have in recent years proposed a new paradigm for the layered communication architecture: the cross-layer design framework. In general terms, cross-layer design refers to a protocol design in which the dependence between protocol layers is actively exploited, breaking the strict rules that restrict communication to adjacent layers in the original reference model and allowing direct interaction among different layers of the stack. Efficient management of the available radio resources demands efficient, low-complexity packet schedulers that prioritize users' transmissions according to inputs provided by lower as well as upper layers of the protocol stack, fully in line with the cross-layer design paradigm.
Specifically, efficiently designed packet schedulers for 4G networks should maximize the available capacity, taking into account the limitations imposed by the mobile radio channel while complying with the set of QoS requirements from the application layer. The IEEE 802.16e standard, also known as Mobile WiMAX, appears to comply with the specifications of 4G mobile networks. Its scalable architecture, low-cost implementation and high data throughput enable efficient data multiplexing and low data latency, attributes essential for broadband data services. In addition, the connection-oriented approach of its medium access control layer is fully compliant with the quality of service demands of such applications. Therefore, Mobile WiMAX seems to be a promising candidate for 4G mobile wireless networks. This thesis proposes the investigation, design and implementation of packet scheduling algorithms for the efficient management of the available radio resources in the time, frequency and spatial domains of Mobile WiMAX networks. The proposed algorithms combine input metrics from the physical layer with QoS requirements from upper layers, following the cross-layer design paradigm. The proposed schedulers are evaluated by means of system-level simulations, conducted on a system-level simulation platform implementing the physical and medium access control layers of the IEEE 802.16e standard.
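The abstract does not give the scheduling rule itself; as an illustration only, the sketch below ranks users with a metric in the spirit of well-known cross-layer schedulers such as M-LWDF, combining the instantaneous achievable rate (a physical-layer input) with the head-of-line delay relative to a QoS deadline (an upper-layer input). All names, numbers and the choice of metric are assumptions for illustration, not the thesis's algorithm.

```python
# Illustrative sketch (not the thesis's algorithm): a cross-layer scheduling
# metric in the spirit of M-LWDF, ranking users by channel quality (physical
# layer) weighted by queueing delay relative to a QoS deadline (upper layers).
import math
from dataclasses import dataclass

@dataclass
class User:
    name: str
    inst_rate: float      # achievable rate on this resource unit now (bit/s)
    avg_rate: float       # throughput averaged over a sliding window (bit/s)
    hol_delay: float      # head-of-line packet delay (s)
    delay_bound: float    # QoS delay budget (s)
    max_violation: float  # tolerated probability of exceeding the budget

def mlwdf_metric(u: User) -> float:
    """Larger metric -> higher scheduling priority on this resource unit."""
    alpha = -math.log(u.max_violation) / u.delay_bound   # QoS urgency weight
    return alpha * u.hol_delay * (u.inst_rate / u.avg_rate)

users = [
    User("voip", 1.2e6, 0.8e6, 0.040, 0.050, 0.05),
    User("video", 4.0e6, 3.5e6, 0.120, 0.300, 0.05),
    User("best_effort", 6.0e6, 5.0e6, 0.500, 10.0, 0.50),
]

# Assign the current resource unit (e.g. one OFDMA slot) to the top-ranked user.
winner = max(users, key=mlwdf_metric)
print(winner.name, round(mlwdf_metric(winner), 3))
```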
Abstract:
This paper presents an analysis and discussion, based on cooperative game theory, of the allocation of the cost of losses to generators and demands in transmission systems. We construct a cooperative game theory model in which the players are represented by equivalent bilateral exchanges, and we search for a unique loss allocation solution, the Core. Other solution concepts, such as the Shapley Value, the Bilateral Shapley Value and the Kernel, are also explored. Our main objective is to illustrate why it is not possible to find an optimal solution for allocating the cost of losses to the users of a network. Results and relevant conclusions are presented for a 4-bus system and a 14-bus system.
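As an illustration of one of the solution concepts mentioned, the sketch below computes the Shapley Value of a small three-player cost game. The characteristic-function numbers are invented for the example and do not come from the paper's 4-bus or 14-bus cases.

```python
# Illustrative sketch: Shapley Value of a small cost game. The characteristic
# function below is made up for the example, not taken from the paper.
from itertools import permutations

players = ["G1", "G2", "D1"]          # e.g. two generators and one demand
cost = {                              # cost of losses attributed to each coalition
    frozenset(): 0.0,
    frozenset({"G1"}): 4.0,
    frozenset({"G2"}): 6.0,
    frozenset({"D1"}): 5.0,
    frozenset({"G1", "G2"}): 8.0,
    frozenset({"G1", "D1"}): 7.0,
    frozenset({"G2", "D1"}): 9.0,
    frozenset({"G1", "G2", "D1"}): 11.0,
}

def shapley(players, cost):
    """Average marginal cost contribution over all orderings of the players."""
    phi = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            phi[p] += cost[coalition | {p}] - cost[coalition]
            coalition = coalition | {p}
    return {p: v / len(orderings) for p, v in phi.items()}

allocation = shapley(players, cost)
print(allocation)   # the allocations sum to the cost of the grand coalition
assert abs(sum(allocation.values()) - cost[frozenset(players)]) < 1e-9
```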
Abstract:
This paper presents a mixed-integer linear programming approach to solving the problem of the optimal type, size and allocation of distributed generators (DGs) in radial distribution systems. In the proposed formulation, (a) the steady-state operation of the radial distribution system, considering different load levels, is modeled through linear expressions; (b) different types of DGs are represented by their capability curves; (c) the short-circuit current capacity of the circuits is modeled through linear expressions; and (d) different topologies of the radial distribution system are considered. The objective function minimizes the annualized investment and operation costs. The use of a mixed-integer linear formulation guarantees convergence to optimality using existing optimization software. The results for one test system are presented in order to show the accuracy as well as the efficiency of the proposed solution technique.
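A much-simplified sketch of the kind of siting and sizing MILP described is given below: it keeps only integer installation decisions, a per-bus limit and a total-capacity requirement, and omits the linearized power flow, capability curves and short-circuit constraints of the paper's formulation. The bus data and costs are invented, and the PuLP package is assumed as the solver interface.

```python
# Much-simplified sketch of a DG siting/sizing MILP (illustrative data only;
# the paper's power-flow, capability-curve and short-circuit constraints are
# omitted). Assumes the PuLP package is available.
import pulp

buses = ["b1", "b2", "b3"]
dg_types = {"pv": {"size_mw": 1.0, "annual_cost": 50.0},
            "gas": {"size_mw": 2.0, "annual_cost": 80.0}}
required_mw = 3.0                      # total DG capacity the planner must place
max_units_per_bus = 2

prob = pulp.LpProblem("dg_allocation", pulp.LpMinimize)

# x[(b, t)] = number of units of DG type t installed at bus b (integer)
x = {(b, t): pulp.LpVariable(f"x_{b}_{t}", lowBound=0,
                             upBound=max_units_per_bus, cat="Integer")
     for b in buses for t in dg_types}

# Objective: minimize annualized investment cost.
prob += pulp.lpSum(dg_types[t]["annual_cost"] * x[b, t]
                   for b in buses for t in dg_types)

# Meet the required total capacity.
prob += pulp.lpSum(dg_types[t]["size_mw"] * x[b, t]
                   for b in buses for t in dg_types) >= required_mw

# Per-bus limit on installed units.
for b in buses:
    prob += pulp.lpSum(x[b, t] for t in dg_types) <= max_units_per_bus

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (b, t), var in x.items():
    if var.value() and var.value() > 0:
        print(b, t, int(var.value()))
```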
Abstract:
The femtocell concept aims to combine fixed-line broadband access with mobile telephony through the deployment of low-cost, low-power third- and fourth-generation base stations in subscribers' homes. While the self-configuration of femtocells is a plus, it can limit the quality of service (QoS) of the users and reduce the efficiency of the network when it relies on outdated allocation parameters such as the signal power level. To this end, this paper presents a proposal for the optimized allocation of users in a co-channel macro-femto network that enables self-configuration and public access, aiming to maximize the quality of service of applications and to use the available energy more efficiently, in line with the concept of green networking. Thus, when the user needs to make a voice or data call, the mobile phone has to decide which network to connect to, using information on the number of connections, the QoS parameters (packet loss and throughput), and the signal power level of each network. For this purpose, the system is modeled as a Markov Decision Process, which is formulated to obtain an optimal policy that can be applied on the mobile phone. The resulting policy is flexible, allowing different analyses, and adaptive to the specific characteristics defined by the telephone company. The results show that, compared with traditional QoS-based approaches, the policy proposed here can improve energy efficiency by up to 10%.
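A toy sketch of the kind of MDP formulation described follows: the state is a coarse load level of the macro and femto cells, the action is which network to join, and the reward trades off QoS against energy. All states, transition probabilities and rewards are invented for the example; the paper's actual formulation is not reproduced here.

```python
# Toy sketch of an MDP for macro-vs-femto network selection, solved by value
# iteration. States, transitions and rewards are invented for illustration.
states = ["macro_light", "macro_busy", "femto_light", "femto_busy"]
actions = ["join_macro", "join_femto"]

# P[(s, a)] -> list of (next_state, probability) pairs.
P = {
    (s, a): [("macro_busy" if a == "join_macro" else "femto_busy", 0.3),
             ("macro_light" if a == "join_macro" else "femto_light", 0.7)]
    for s in states for a in actions
}
# R[(a, next_state)] -> reward trading QoS against energy use.
R = {
    ("join_macro", "macro_light"): 1.0, ("join_macro", "macro_busy"): 0.2,
    ("join_femto", "femto_light"): 1.5, ("join_femto", "femto_busy"): 0.1,
}

gamma = 0.9
V = {s: 0.0 for s in states}
for _ in range(200):                       # value iteration
    V = {s: max(sum(p * (R[(a, s2)] + gamma * V[s2]) for s2, p in P[(s, a)])
                for a in actions)
         for s in states}

# Greedy policy with respect to the converged value function.
policy = {s: max(actions,
                 key=lambda a: sum(p * (R[(a, s2)] + gamma * V[s2])
                                   for s2, p in P[(s, a)]))
          for s in states}
print(policy)
```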
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Suppliers of water and energy are frequently natural monopolies, with their pricing regulated by governmental agencies. Pricing schemes are evaluated by the efficiency of the resource allocation they lead to, the capacity of the utilities to recover their costs, and the distributional effects of the policies, in particular impacts on the poor. One pricing approach has been average cost pricing, which guarantees cost recovery and allows utilities to provide their product at relatively low prices. However, average cost pricing leads to economically inefficient consumption levels when sources of water and energy are limited and increasing the supply is costly. An alternative approach is increasing block rates (hereafter, IBR or tiered pricing), where individuals pay a low rate for an initial consumption block and a higher rate as they increase use beyond that block. An example of IBR is shown in Figure 1, which shows a rate structure for residential water use. With the rates in Figure 1, a household would be charged $0.46 and $0.71 per hundred gallons for consumption below and above 21,000 gallons per month, respectively.
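Using the two-block rates quoted above, a short sketch of how such a bill would be computed (the block boundary and rates come from the abstract; the example household consumption is invented):

```python
# Two-block increasing block rate (IBR) bill, using the rates quoted above:
# $0.46 per hundred gallons up to 21,000 gal/month, $0.71 above that.
def ibr_bill(gallons: float, threshold: float = 21_000,
             low_rate: float = 0.46, high_rate: float = 0.71) -> float:
    """Rates are per hundred gallons; returns the monthly charge in dollars."""
    low_block = min(gallons, threshold)
    high_block = max(gallons - threshold, 0.0)
    return (low_block * low_rate + high_block * high_rate) / 100.0

# Example (consumption figure invented): a 25,000-gallon month costs
# 210 * $0.46 + 40 * $0.71 = $96.60 + $28.40 = $125.00.
print(f"${ibr_bill(25_000):.2f}")
```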
Abstract:
A prevalent claim is that we are in a knowledge economy. When we talk about the knowledge economy, we generally mean the concept of a "knowledge-based economy", indicating the use of knowledge and technologies to produce economic benefits. Knowledge is thus both a tool and a raw material (people's skills) for producing some kind of product or service. In this kind of environment, economic organization is undergoing several changes. For example, authority relations are less important, legal and ownership-based definitions of the boundaries of the firm are becoming irrelevant, and there are only few constraints on the set of coordination mechanisms. Hence what characterises a knowledge economy is the growing importance of human capital in productive processes (Foss, 2005) and the increasing knowledge intensity of jobs (Hodgson, 1999). Economic processes are also highly intertwined with social processes: they are likely to be informal and reciprocal rather than formal and negotiated. Another important point is the problem of the division of labor: as economic activity becomes mainly intellectual and requires the integration of specific and idiosyncratic skills, the task of dividing the work and assigning it to the most appropriate individuals becomes arduous, a "supervisory problem" (Hodgson, 1999) emerges, and traditional hierarchical control may prove increasingly ineffective. Not only does the specificity of know-how make it awkward to monitor the execution of tasks; more importantly, top-down integration of skills may be difficult because 'the nominal supervisors will not know the best way of doing the job – or even the precise purpose of the specialist job itself – and the worker will know better' (Hodgson, 1999). We therefore expect that the organization of the economic activity of specialists should be, at least partially, self-organized. The aim of this thesis is to bridge studies from computer science, and in particular from Peer-to-Peer (P2P) networks, to organization theories. We think that the P2P paradigm fits well with organization problems related to all those situations in which a central authority is not possible. We believe that P2P networks show a number of characteristics similar to firms working in a knowledge-based economy, and hence that the methodology used for studying P2P networks can be applied to organization studies. There are three main characteristics we think P2P networks have in common with firms involved in the knowledge economy: decentralization (in a pure P2P system every peer is an equal participant; there is no central authority governing the actions of the single peers); cost of ownership (P2P computing implies shared ownership, reducing the cost of owning the systems and the content, and the cost of maintaining them); and self-organization (the process in a system leading to the emergence of global order within the system without the presence of another system dictating this order). These characteristics are also present in the kind of firm that we try to address, and that is why we have shifted the techniques we adopted for studies in computer science (Marcozzi et al., 2005; Hales et al., 2007) to management science.
Abstract:
The past decade has seen the energy consumption of servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on servers and cooling has risen above $30 billion and is likely to exceed spending on new server hardware. The rapid rise in energy consumption has posed a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but also necessary. This dissertation tackles the challenges of both reducing the energy consumption of server systems and reducing the costs for Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC's power: the server system, which accounts for 56% of the total power consumption of an IDC, and the cooling and humidification systems, which account for about 30% of the total power consumption. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data center utilization. In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy proportional model, an optimal server allocation and frequency adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DV/FS) and Vary-On, Vary-Off (VOVF) mechanisms that work together for more energy savings. Meanwhile, corresponding strategies are proposed to deal with the transition overheads. We further extend server energy management to the IDC's cost management, helping OSPs to conserve energy, manage their own electricity costs, and lower carbon emissions. We have developed an optimal energy-aware load dispatching strategy that periodically maps more requests to the locations with lower electricity prices. A carbon emission limit is imposed, and the volatility of the carbon offset market is also considered. Two energy-efficient strategies are applied to the server system and the cooling system, respectively. With the rapid development of cloud services, we also carry out research to reduce server energy in cloud computing environments. In this work, we propose a new live virtual machine (VM) placement scheme that can effectively map VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping probability matrix is constructed, in which each VM request is assigned a probability of running on each PM. The VM/PM mapping probability matrix takes into account resource limitations, VM operation overheads, server reliability, and energy efficiency. The evolution of Internet Data Centers and the increasing demands of web services raise great challenges for improving the energy efficiency of IDCs. We also identify several potential areas for future research in each chapter.
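As a minimal illustration of the price-aware load-dispatching idea mentioned, the sketch below sends requests preferentially to the sites with the cheapest electricity, subject to per-site capacity. The prices, capacities and energy-per-request figure are invented; this is not the dissertation's actual strategy, which also handles carbon limits and offset-market volatility.

```python
# Illustrative sketch of price-aware load dispatching across data centers:
# requests go preferentially to locations with the cheapest electricity,
# subject to per-site capacity. All figures are invented.
def dispatch(total_requests: int, sites: dict, kwh_per_request: float = 0.002):
    """sites: name -> {'price': $/kWh, 'capacity': max requests this period}."""
    remaining = total_requests
    plan, cost = {}, 0.0
    for name, info in sorted(sites.items(), key=lambda kv: kv[1]["price"]):
        served = min(remaining, info["capacity"])
        plan[name] = served
        cost += served * kwh_per_request * info["price"]
        remaining -= served
        if remaining == 0:
            break
    return plan, cost, remaining       # remaining > 0 means demand was shed

sites = {"virginia": {"price": 0.065, "capacity": 40_000},
         "oregon":   {"price": 0.052, "capacity": 30_000},
         "dublin":   {"price": 0.110, "capacity": 50_000}}
plan, cost, unserved = dispatch(60_000, sites)
print(plan, round(cost, 2), unserved)
```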
Abstract:
A patient classification system was developed, integrating a patient acuity instrument with a computerized nursing distribution method based on a linear programming model. The system was designed for real-time measurement of patient acuity (workload) and allocation of nursing personnel to optimize the utilization of resources. The acuity instrument was a prototype tool with eight categories of patients defined by patient severity and nursing intensity parameters. From this tool, the demand for nursing care was defined in patient points, with one point equal to one hour of RN time. Validity and reliability of the instrument were determined as follows: (1) content validity by a panel of expert nurses; (2) predictive validity through a paired t-test analysis of preshift and postshift categorization of patients; (3) initial reliability by a one-month pilot of the instrument in a practice setting; and (4) interrater reliability by the Kappa statistic. The nursing distribution system was a linear programming model using a branch-and-bound technique for obtaining integer solutions. The objective function was to minimize the total number of nursing personnel used by optimally assigning the staff to meet the acuity needs of the units. A penalty weight was used as a coefficient of the objective function variables to define priorities for allocation of staff. The demand constraints were requirements to meet the total acuity points needed for each unit and to have a minimum number of RNs on each unit. Supply constraints were: (1) the total availability of each type of staff and the value of that staff member (value was determined relative to that type of staff's ability to perform the job function of an RN, i.e., the value of eight hours of an RN = 8 points and of an LVN = 6 points); and (2) the number of personnel available for floating between units. The capability of the model to assign staff quantitatively and qualitatively equal to the manual method was established by a thirty-day comparison. Sensitivity testing demonstrated appropriate adjustment of the optimal solution to changes in penalty coefficients in the objective function and to acuity totals in the demand constraints. Further investigation of the model documented correct adjustment of assignments in response to changes in staff values, and cost minimization through the addition of a dollar coefficient to the objective function.
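A much-simplified sketch of the kind of staffing model described is given below: integer numbers of RNs and LVNs are assigned to units, staff values of 8 and 6 points per eight-hour shift follow the abstract, and a minimum-RN constraint per unit and availability limits are enforced. The unit demands, availability and penalty weights are invented, and the PuLP package is assumed as the solver interface.

```python
# Much-simplified sketch of the nursing distribution model: assign RNs and
# LVNs to units so that acuity demand (in points) is met, each unit has a
# minimum number of RNs, and penalty-weighted staff use is minimized.
# Staff values (RN = 8, LVN = 6 points per 8-hour shift) follow the abstract;
# demands, availability and penalty weights are invented. Assumes PuLP.
import pulp

units = {"icu": {"demand_points": 40, "min_rn": 2},
         "med_surg": {"demand_points": 30, "min_rn": 1}}
staff_value = {"rn": 8, "lvn": 6}          # points contributed per person
available = {"rn": 7, "lvn": 4}
penalty = {"rn": 1.0, "lvn": 0.9}          # allocation priorities

prob = pulp.LpProblem("nursing_distribution", pulp.LpMinimize)
x = {(u, s): pulp.LpVariable(f"x_{u}_{s}", lowBound=0, cat="Integer")
     for u in units for s in staff_value}

# Objective: minimize the penalty-weighted number of staff used.
prob += pulp.lpSum(penalty[s] * x[u, s] for u in units for s in staff_value)

for u, info in units.items():
    # Meet the unit's acuity demand in points.
    prob += pulp.lpSum(staff_value[s] * x[u, s]
                       for s in staff_value) >= info["demand_points"]
    # Minimum number of RNs on each unit.
    prob += x[u, "rn"] >= info["min_rn"]

for s in staff_value:
    # Do not exceed the staff available for this shift.
    prob += pulp.lpSum(x[u, s] for u in units) <= available[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({k: int(v.value()) for k, v in x.items()})
```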
Abstract:
This study compares the procurement-cost-minimizing and productive-efficiency performance of the auction mechanism used by independent system operators (ISOs) in wholesale electricity auction markets in the U.S. with that of a proposed alternative. The current practice allocates energy contracts as if the auction featured a discriminatory final payment method when, in fact, the markets are uniform-price auctions. The proposed alternative explicitly accounts for the market clearing price during the allocation phase. We find that the proposed alternative largely outperforms the current practice on the basis of procurement costs in the context of simple auction markets featuring both day-ahead and real-time auctions, and that the procurement cost advantage of the alternative is complete when we simulate the effects of increased competition. We also find that a trade-off between the objectives of procurement cost minimization and productive efficiency emerges in our simple auction markets and persists in the face of increased competition.
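As background to the distinction the study turns on, the sketch below clears a one-shot supply auction both ways: the cheapest offers are accepted until demand is met, and accepted offers are then paid either their own bid (the discriminatory, pay-as-bid rule) or the marginal accepted bid (the uniform-price rule). The offers and demand are invented and the sketch is not the study's simulation model.

```python
# Background sketch: clearing a one-shot supply auction under a discriminatory
# (pay-as-bid) rule versus a uniform-price rule. Offers and demand are invented.
def clear(offers, demand_mw):
    """offers: list of (price $/MWh, quantity MW); cheapest offers accepted first."""
    accepted, remaining = [], demand_mw
    for price, qty in sorted(offers):
        take = min(qty, remaining)
        if take > 0:
            accepted.append((price, take))
            remaining -= take
        if remaining == 0:
            break
    clearing_price = accepted[-1][0]                      # marginal accepted offer
    pay_as_bid = sum(p * q for p, q in accepted)
    uniform = clearing_price * sum(q for _, q in accepted)
    return clearing_price, pay_as_bid, uniform

offers = [(20.0, 50), (25.0, 40), (32.0, 60), (45.0, 30)]
price, disc_cost, unif_cost = clear(offers, demand_mw=120)
print(price, disc_cost, unif_cost)   # 32.0, 20*50+25*40+32*30 = 2960, 32*120 = 3840
```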
Abstract:
This study of the wholesale electricity market compares the efficiency performance of the auction mechanism currently in place in U.S. markets with the performance of a proposed mechanism. The analysis highlights the importance of considering strategic behavior when comparing different institutional systems. We find that in concentrated markets, neither auction mechanism can guarantee an efficient allocation. The advantage of the current mechanism increases with increased price competition if market demand is perfectly inelastic. However, if market demand has some responsiveness to price, the superiority of the current auction with respect to efficiency is not that obvious. We present a case where the proposed auction outperforms the current mechanism on efficiency even if all offers reflect true production costs. We also find that a market designer might face a choice problem with a tradeoff between lower electricity cost and production efficiency. Some implications for social welfare are discussed as well.
Abstract:
We propose a nonparametric model for global cost minimization as a framework for optimal allocation of a firm's output target across multiple locations, taking account of differences in input prices and technologies across locations. This should be useful for firms planning production sites within a country and for foreign direct investment decisions by multi-national firms. Two illustrative examples are included. The first example considers the production location decision of a manufacturing firm across a number of adjacent states of the US. In the other example, we consider the optimal allocation of US and Canadian automobile manufacturers across the two countries.
Abstract:
The research project is an extension of economic theory to the health care field and to health care research projects evaluating the influence of demand and supply variables upon medical care inflation. The research tests a model linking the demographic and socioeconomic characteristics of the population, its community case mix and technology, the prices of goods and services other than medical care, the way its medical services are delivered, and the health care resources available to its population to different utilization patterns which, consequently, lead to variations in health care prices among metropolitan areas. The research considers the relationship between changes in community characteristics and resources and medical care inflation. The rapidly increasing costs of medical care have been of great concern to the general public, the medical profession and political bodies. Research and analysis of the main factors responsible for the rate of increase of medical care prices is necessary in order to devise appropriate solutions to cope with the problem. An understanding of the relationships between community characteristics and resources and medical care costs in metropolitan areas potentially offers guidance in individual plan and national policy development. The research considers 145 factors measuring community milieu (demographic, social, educational, economic, illness level, prices of goods and services other than medical care, hospital supply, physician resources and technological factors). Through bivariate correlation analysis, the number of variables was reduced to a set of one to four variables for each cost equation. Two approaches were identified to track inflation in the health care industry. One approach measures costs of production, which accounts for price and volume increases. The other approach measures price increases. One general and four specific measures were developed to represent the two major approaches. The general measure considers the increase in medical care prices as a whole, and the specific measures deal with hospital costs and physicians' fees. The relationships among changes in community characteristics and resources and health care inflation were analyzed using bivariate correlation and regression analysis methods. It was concluded that changes in community characteristics and resources are predictive of hospital cost and physicians' fee inflation, but are not predictive of increases in medical care prices. These findings provide guidance for the formulation of public policy which could alter the trend of medical care inflation and for the allocation of limited Federal funds.
Abstract:
This paper tackles the optimization of applications in multi-provider hybrid cloud scenarios from an economic point of view. In these scenarios, the great majority of solutions offer the automatic allocation of resources on different cloud providers based on their current prices. Our approach, however, introduces a novel solution by making maximum use of divide and rule. This paper describes a methodology to create cost-aware cloud applications that can be broken down into the three most important components in cloud infrastructures: computation, network and storage. A real videoconference system has been modified in order to evaluate this idea with both theoretical and empirical experiments. This system has become a widely used tool in several national and European projects for e-learning and collaboration purposes.
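A toy sketch of the cost decomposition mentioned follows: each component (computation, network, storage) is priced independently per provider, and each component is placed with whichever provider is currently cheapest for it. The provider names, prices and usage figures are invented for illustration and are not the paper's data.

```python
# Toy sketch of per-component cost-aware placement across providers:
# computation, network and storage are priced separately, and each component
# is placed with whichever provider is currently cheapest for it.
# Prices and usage figures are invented for illustration.
usage = {"compute_hours": 720, "network_gb": 500, "storage_gb": 200}

prices = {   # provider -> unit price for each component
    "provider_a": {"compute_hours": 0.090, "network_gb": 0.120, "storage_gb": 0.023},
    "provider_b": {"compute_hours": 0.085, "network_gb": 0.150, "storage_gb": 0.020},
}

placement = {}
total = 0.0
for component, amount in usage.items():
    provider = min(prices, key=lambda p: prices[p][component])
    cost = amount * prices[provider][component]
    placement[component] = (provider, round(cost, 2))
    total += cost

print(placement)
print(f"total monthly cost: ${total:.2f}")
```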