899 results for overhead allocation
Estimates of patient costs related with population morbidity: Can indirect costs affect the results?
Abstract:
A number of health economics works require patient cost estimates as a basic information input. However, the accuracy of those cost estimates generally remains unspecified. We propose to investigate how the allocation of indirect costs, or overheads, can affect the estimation of patient costs, in order to allow for improvements in the analysis of patient cost estimates. Instead of focusing on the costing method, this paper highlights the changes in explained variance observed when a methodology is chosen. We compare three overhead allocation methods for a specific Spanish population adjusted using Clinical Risk Groups (CRG), and we obtain different series of full-cost group estimates. As a result, there are significant gains in the proportion of variance explained, depending upon the methodology used. Furthermore, we find that the global amount of variation explained by risk adjustment models depends mainly on direct costs and is independent of the level of aggregation used in the classification system.
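The effect this abstract describes can be sketched numerically: the share of variance a risk-group model explains (R²) depends on how overhead is folded into the full-cost series. The patients, groups, and overhead rules below are invented for illustration, not taken from the paper.

```python
def r_squared(costs, groups):
    """R^2 of predicting each patient's cost by their risk-group mean."""
    grand_mean = sum(costs) / len(costs)
    by_group = {}
    for c, g in zip(costs, groups):
        by_group.setdefault(g, []).append(c)
    means = {g: sum(v) / len(v) for g, v in by_group.items()}
    ss_res = sum((c - means[g]) ** 2 for c, g in zip(costs, groups))
    ss_tot = sum((c - grand_mean) ** 2 for c in costs)
    return 1 - ss_res / ss_tot

direct = [100, 120, 300, 320, 900, 950]   # direct cost per patient (hypothetical)
groups = ["A", "A", "B", "B", "C", "C"]   # CRG-style risk groups

# Two overhead rules: a flat per-patient charge preserves the group structure,
# while overhead driven by a factor unrelated to morbidity dilutes it.
flat = [c + 200 for c in direct]
driver_based = [c + o for c, o in zip(direct, [50, 400, 60, 380, 70, 390])]

print(r_squared(flat, groups), r_squared(driver_based, groups))
```

With these toy numbers the flat rule leaves R² essentially at the direct-cost level, while the unrelated driver lowers it, mirroring the paper's point that the allocation methodology changes the variance explained.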
Abstract:
The objective of this study is to identify and describe a sample of the bank costing systems in existence and in operation in Brazil, thereby inferring the degree of development of these systems. The subject was approached through field research divided into three distinct phases. In the first stage, questionnaires were mailed to 54 Brazilian banks, intended to build an overall picture of the stage of development of costing systems in financial institutions. This first phase yielded modest, mostly quantitative results and few firm conclusions. From this general approach, the work evolved into the other phases. The second phase consisted of five interviews conducted at banks with differing experience in the cost area. The third phase, also in the form of personal interviews, was carried out to address several questions left open by the treatment given to the subject in the second phase. These last two phases refined and improved the results, forming a representative picture of the state of costing systems in financial institutions. This picture indicated a strong preference for direct costing systems with overhead allocation, to the detriment of standard costing systems. Profitability information, particularly regarding customers, is among the most frequently cited objectives. The research highlighted the problems faced in operating the systems, mainly in the collection of physical data and in the various approximation criteria used. It was observed that solutions that improve the quality of the information generated by the system necessarily involve optimizing the collection and updating of physical data. It was also observed that all the systems studied have methodological limitations, and broad discussion of these problems is pointed out as the path to their continuous improvement.
Abstract:
The main objective of this Master's thesis is to develop a cost allocation model for a leading food industry company in Finland. The goal is to develop an allocation method for the fixed overhead expenses incurred in a specific production unit and to create a plausible tracking system for product costs. The second objective is to construct an allocation model that can be adapted to other units as well. Costs, activities, drivers and appropriate allocation methods are studied. The thesis begins with a literature review of existing activity-based costing (ABC) theory, an inspection of cost information, and interviews with company officials to form a general view of the requirements for the model to be constructed. Familiarization with the company started with the existing cost accounting methods. The main proposals for a new allocation model emerged from the interviews and were used to set targets for developing the new allocation method. As a result of this thesis, an Excel-based model is created from the theoretical and empirical data. The new system handles overhead costs in more detail, improving cost awareness and transparency in cost allocations and refining the products' cost structure. The improved cost awareness is achieved by selecting the cost drivers best suited to the situation. Capacity changes are also taken into consideration; in particular, the use of practical or normal capacity instead of theoretical capacity is suggested. Recommendations for further development are made regarding capacity handling and cost collection.
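The driver-based allocation at the heart of an ABC model like the one this thesis builds can be sketched in a few lines: each overhead pool is spread over products in proportion to their consumption of that pool's cost driver. The pools, drivers, and figures below are hypothetical, not the company's data.

```python
# Hypothetical overhead pools (euros) and driver consumption per product
# (e.g. number of machine setups, inspection hours).
overhead_pools = {"machine_setup": 12000.0, "quality_control": 8000.0}
driver_usage = {
    "machine_setup":   {"product_a": 30, "product_b": 10},
    "quality_control": {"product_a": 50, "product_b": 150},
}

def allocate_overhead(pools, usage):
    """Spread each pool over products in proportion to driver consumption."""
    allocated = {}
    for pool, total_cost in pools.items():
        rate = total_cost / sum(usage[pool].values())   # cost per driver unit
        for product, units in usage[pool].items():
            allocated[product] = allocated.get(product, 0.0) + rate * units
    return allocated

print(allocate_overhead(overhead_pools, driver_usage))
```

Because each pool is fully distributed, the allocated amounts always reconcile with total overhead, which is what gives the method its transparency.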
Abstract:
Bank switching in embedded processors with a partitioned memory architecture results in both code-size and run-time overhead. This work presents an algorithm, and its application, that assists the compiler in eliminating the redundant bank-switching code introduced and in deciding the optimal allocation of data to banked memory. A relation matrix, formed from the memory-bank state transition corresponding to each bank-selection instruction, is used to detect redundant code. Data allocation to memory is done by considering all possible permutations of memory banks and combinations of data. The compiler output for each data-mapping scheme is subjected to a static machine-code analysis, which identifies the one with the minimum number of bank-switching instructions. Even though the method is compiler independent, the algorithm exploits certain architectural features of the target processor. A prototype based on PIC 16F87X microcontrollers is described. The method scales well to larger numbers of memory banks and to other architectures, so high-performance compilers can integrate this technique for efficient code generation. The technique is illustrated with an example.
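The core redundancy-elimination idea can be sketched as a pass over straight-line code: track which bank is currently selected and drop any bank-select instruction that re-selects it. The instruction format is invented for illustration; a real pass would operate on the compiler's intermediate representation and use the paper's relation matrix across control flow.

```python
def eliminate_redundant_banksel(instrs):
    """Remove BANKSEL instructions that re-select the already-active bank."""
    out, current_bank = [], None
    for op in instrs:
        if op.startswith("BANKSEL "):
            bank = op.split()[1]
            if bank == current_bank:
                continue          # redundant: this bank is already selected
            current_bank = bank
        out.append(op)
    return out

code = ["BANKSEL 0", "MOVWF x", "BANKSEL 0", "MOVWF y", "BANKSEL 1", "MOVWF z"]
print(eliminate_redundant_banksel(code))
```

The second `BANKSEL 0` is dropped while the genuine switch to bank 1 is kept, which is exactly the kind of saving the static machine-code analysis counts when ranking data-mapping schemes.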
Abstract:
The optimized allocation of protective devices at strategic points of the circuit improves the quality of the energy supply and the system reliability indices. This paper presents a nonlinear integer programming (NLIP) model with binary variables for the problem of protective device allocation in the main feeder and all branches of an overhead distribution circuit, in order to improve the reliability indices and provide customers with service of high quality and reliability. The constraints of the problem take into account technical and economic limitations, such as coordination of serial protective devices, available equipment, the importance of the feeder, and the circuit topology. Genetic algorithms (GAs) are proposed to solve this problem, using a binary representation in which a 1 indicates the allocation of a protective device (recloser, sectionalizer or fuse) at a predefined point of the circuit and a 0 its absence. Results are presented for a real circuit (134 buses) with protective device allocation possible at 29 points, and the ability of the algorithm to find good solutions while significantly improving the reliability indicators is shown.
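The binary encoding described above lends itself to a very small GA sketch: a chromosome is a 0/1 vector over the 29 candidate points, and an economic constraint is enforced as a fitness penalty. The per-point benefits, device cost, budget, and GA parameters below are all invented; the paper's actual fitness comes from reliability indices of the 134-bus circuit.

```python
import random
random.seed(7)

N_POINTS = 29                      # candidate installation points on the feeder
benefit = [random.uniform(1.0, 10.0) for _ in range(N_POINTS)]  # hypothetical gains
DEVICE_COST, BUDGET = 3.0, 10      # cost per device, max devices (economic limit)

def fitness(chrom):
    installed = sum(chrom)
    score = sum(b for b, bit in zip(benefit, chrom) if bit) - DEVICE_COST * installed
    if installed > BUDGET:                          # penalize infeasible plans
        score -= 100.0 * (installed - BUDGET)
    return score

def evolve(pop_size=40, generations=60):
    # Seed the empty plan so elitism guarantees best fitness never drops below 0.
    pop = [[0] * N_POINTS] + [
        [random.randint(0, 1) for _ in range(N_POINTS)] for _ in range(pop_size - 1)
    ]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_POINTS)     # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(N_POINTS)] ^= 1  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best), round(fitness(best), 2))
```

The 0/1 chromosome maps one-to-one onto the paper's "does (1) or does not (0) allocate a device at this point" representation.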
Abstract:
This paper proposes a new approach to optimal phasor measurement unit placement for fault location on electric power distribution systems, using the Greedy Randomized Adaptive Search Procedure (GRASP) metaheuristic and Monte Carlo simulation. The placement model proposed here is a general methodology for siting devices that record voltage sag magnitudes for any fault location algorithm that uses voltage information measured at a limited set of nodes along the feeder. An overhead, three-phase, three-wire, 13.8 kV, 134-node real-life feeder model is used to evaluate the algorithm. Tests show that the results of the fault location methodology were improved by the optimized meter placement produced with this approach.
Abstract:
This paper presents an approach for allocating active transmission losses among the agents of the system. The approach uses the primal and dual variable information from the Optimal Power Flow in the loss allocation strategy; the allocation coefficients are determined via Lagrange multipliers. The paper emphasizes the need to consider the operational constraints and parameters of the system in the problem solution. An example for a 3-bus system is presented in detail, along with a comparative test against the main allocation methods. Case studies on the IEEE 14-bus system verify the influence of the system's constraints and parameters on the loss allocation.
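The multiplier-based scheme can be sketched in miniature: each agent receives a share proportional to its marginal contribution dLoss/dP, rescaled so the shares reconcile with total losses. The injections, sensitivities, and loss total below are made-up numbers standing in for an actual OPF solution.

```python
injections = {"G1": 120.0, "G2": 80.0, "L1": -90.0, "L2": -105.0}   # MW
marginal_loss = {"G1": 0.02, "G2": 0.05, "L1": 0.03, "L2": 0.04}    # dLoss/dP
total_losses = 5.0                                                  # MW

def allocate_losses(inj, sens, total):
    """Share losses in proportion to |P_i| * dLoss/dP_i, scaled to the total."""
    raw = {k: sens[k] * abs(p) for k, p in inj.items()}   # unnormalized shares
    scale = total / sum(raw.values())
    return {k: v * scale for k, v in raw.items()}

shares = allocate_losses(injections, marginal_loss, total_losses)
print(shares)
```

Note how G2, with the higher sensitivity, is charged more than the larger but better-located G1; this is the kind of operating-point dependence the paper's case studies examine.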
Abstract:
This paper presents a new approach to the transmission loss allocation problem in a deregulated system. The approach belongs to the family of incremental methods and treats all the constraints of the network, i.e. control, state and functional constraints. It is based on the perturbation-of-optimum theorem: from a given optimal operating point obtained by the optimal power flow, the loads are perturbed, and a new optimal operating point that satisfies the constraints is determined by sensitivity analysis. This solution is used to obtain the loss allocation coefficients for the generators and loads of the network. Numerical results compare the proposed approach with other methods on the well-known IEEE 14-bus transmission network, and a further test emphasizes the importance of considering the operational constraints of the network. Finally, the approach is applied to an actual Brazilian equivalent network of 787 buses and compared with the technique currently used by the Brazilian Control Center.
Abstract:
This article presents a tool for the allocation analysis of complex water resource systems, called AcquaNetXL, developed as a spreadsheet into which a linear and a nonlinear optimization model were incorporated. AcquaNetXL keeps the concepts and attributes of a decision support system: it smooths the communication between user and computer, facilitates the understanding and formulation of the problem and the interpretation of the results, and supports the decision-making process, making it clear and organized. The performance of the algorithms used for solving the water allocation problems was satisfactory, especially for the linear model.
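As a toy illustration of the linear allocation problem a tool like AcquaNetXL solves: with a single supply and priority-weighted demands, the linear-programming optimum reduces to serving demands in priority order until the supply is exhausted. The users, volumes, and priorities below are invented; the real tool handles full networks of reservoirs, canals, and demand nodes.

```python
def allocate_water(supply, demands):
    """demands: (user, requested_volume, priority); higher priority served first."""
    allocation = {}
    for user, request, _ in sorted(demands, key=lambda d: -d[2]):
        granted = min(request, supply)   # serve as much as remaining supply allows
        allocation[user] = granted
        supply -= granted
    return allocation

demands = [("irrigation", 60.0, 1), ("urban", 40.0, 3), ("industry", 30.0, 2)]
print(allocate_water(100.0, demands))
```

Here urban and industrial demands are met in full and irrigation absorbs the shortfall, the typical shape of a priority-based allocation result.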
Abstract:
As many countries move toward water sector reforms, practical issues have emerged of how water management institutions can better effect the allocation, regulation, and enforcement of water rights. The problem of nonavailability of water to tailenders on an irrigation system in developing countries, due to unlicensed upstream diversions, is well documented. The reliability of access, or equivalently the uncertainty associated with water availability at their diversion point, becomes a parameter that is likely to influence users' applications for water licenses, as well as their willingness to pay for licensed use. The ability of a water agency to reduce this uncertainty through effective water rights enforcement is related to the fiscal ability of the agency to monitor and enforce licensed use. In this paper, this interplay between the users and the agency is explored, considering the hydraulic structure, or sequence of water use, and the parameters that define the users' and the agency's economics. The potential for free-rider behavior by the users, as well as their proposals for licensed use, are derived conditional on this setting. The analyses are developed in the framework of the theory of "Law and Economics," with user interactions modeled as a game-theoretic enterprise. The state of Ceará, Brazil, is used loosely as an example setting, with parameter values for the experiments indexed to be approximately those relevant to current decisions. The potential for using these ideas in participatory decision making is discussed. This paper is an initial attempt to develop a conceptual framework for analyzing such situations, with a focus on reservoir-canal system water rights enforcement.
Abstract:
The performance optimisation of overhead conductors depends on the systematic investigation of the fretting fatigue mechanisms in the conductor/clamping system. Accordingly, a fretting fatigue rig was designed and a limited range of fatigue tests was carried out in the medium-to-high cycle fatigue regime in order to obtain an exploratory S-N curve for a Grosbeak conductor mounted on a mono-articulated aluminium clamping system. Following these preliminary fatigue tests, the components of the conductor/clamping system (ACSR conductor, upper and lower clamps, bolt and nuts) were subjected to a failure analysis procedure to investigate the metallurgical free variables interfering with the fatigue test results, aiming at optimising testing reproducibility. The results indicated that the rupture of the planar fracture surfaces observed in the external Al strands of the conductor tested under the lower bending amplitude (0.9 mm) occurred by fatigue cracking (1 mm deep), followed by shear overload. The V-type fracture surfaces observed in some Al strands of the conductor tested under the higher bending amplitude (1.3 mm) were also produced by fatigue cracking (approximately 400 μm deep), followed by shear overload. Shear overload fracture (45° fracture surface) was also observed on the remaining Al wires of the conductor tested under the higher bending amplitude (1.3 mm). Additionally, the upper and lower Al-cast clamps presented microstructure-sensitive cracking, which was followed by particle detachment and the formation of abrasive debris at the clamp/conductor tribo-interface, further promoting the fretting mechanism. The detrimental formation of abrasive debris might be inhibited by selecting a more suitable class of as-cast Al alloy for the production of clamps.
Finally, the bolt/nut system showed intense degradation of the carbon steel nut (fabricated in ferritic-pearlitic carbon steel, featuring machined threads of 190 HV), with intense plastic deformation and loss of material. Proper selection of both the bolt and nut materials and of the finishing processing might prevent the loss of clamping pressure during fretting testing. It is important to control the specification of these components (clamps, bolt and nuts) prior to the start of large-scale fretting fatigue testing of overhead conductors in order to increase the reproducibility of this assessment.
Abstract:
Lightning-induced overvoltages have a considerable impact on the power quality of overhead distribution and telecommunication systems, and various models have been developed for computing the electromagnetic transients caused by indirect strokes. The most adequate has been shown to be the one proposed by Agrawal et al.; the Rusck model can be viewed as a particular case, as the two models are equivalent when the lightning channel is perpendicular to the ground plane. In this paper, an extension of the Rusck model that enables the calculation of lightning-induced transients, considering flashes to nearby elevated structures and realistic line configurations, is tested against data obtained from both natural lightning and scale-model experiments. The latter, performed under controlled conditions, can also be used to verify the validity of other coupling models and relevant codes. The so-called Extended Rusck Model, which is shown to be sufficiently accurate, is applied to the analysis of lightning-induced voltages on lines with a shield wire and/or surge arresters. The investigation indicates that the ratio between the peak values of the voltages induced by typical first and subsequent strokes can be either greater or smaller than unity, depending on the line configuration.
Abstract:
Hub-and-spoke networks are widely studied in location theory. They arise in several contexts, including passenger airlines, postal and parcel delivery, and computer and telecommunication networks. Hub location problems usually involve three simultaneous decisions: the optimal number of hub nodes, their locations, and the allocation of the non-hub nodes to the hubs. In the uncapacitated single allocation hub location problem (USAHLP), hub nodes have no capacity constraints and each non-hub node must be assigned to exactly one hub. In this paper, we propose three variants of a simple and efficient multi-start tabu search heuristic, as well as a two-stage integrated tabu search heuristic, to solve this problem. In the multi-start heuristics, several different initial solutions are constructed and then improved by tabu search, while in the two-stage integrated heuristic tabu search is applied to improve both the location and the allocation parts of the problem. Computational experiments using typical benchmark problems (the Civil Aeronautics Board (CAB) and Australian Post (AP) data sets) as well as new and modified instances show that our approaches consistently return the optimal or best-known results in very short CPU times, thus allowing larger instances of the USAHLP than those found in the literature to be solved efficiently. We also report the integer optimal solutions for all 80 CAB data set instances and the 12 AP instances with up to 100 nodes, as well as for the corresponding newly generated AP instances with reduced fixed costs.
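The location/allocation split that the paper's heuristics exploit can be shown in a deliberately stripped-down multi-start sketch: fix a hub set, allocate each node to its nearest open hub, and locally improve the hub set by opening or closing single sites. This is plain local search rather than tabu search, the cost model omits inter-hub flow discounts, and the instance is a random toy, not the CAB/AP data.

```python
import math, random
random.seed(0)

N = 10
coords = [(random.random(), random.random()) for _ in range(N)]
FIXED = 0.4                                    # cost of opening a hub

def dist(i, j):
    (x1, y1), (x2, y2) = coords[i], coords[j]
    return math.hypot(x1 - x2, y1 - y2)

def total_cost(hubs):
    # single allocation: each node is served by its nearest open hub
    return FIXED * len(hubs) + sum(min(dist(i, h) for h in hubs) for i in range(N))

def improve(hubs):
    """One-flip local search: open/close single sites while the cost drops."""
    hubs, improved = set(hubs), True
    while improved:
        improved = False
        for site in range(N):
            cand = hubs ^ {site}               # toggle one site
            if cand and total_cost(cand) < total_cost(hubs):
                hubs, improved = cand, True
    return hubs

# Multi-start: improve from several random single-hub seeds, keep the best.
best = min((improve({random.randrange(N)}) for _ in range(5)), key=total_cost)
print(sorted(best), round(total_cost(best), 3))
```

Replacing the one-flip descent with tabu moves (forbidding recently toggled sites) and adding the discounted inter-hub transfer costs would bring the sketch closer to the heuristics the paper evaluates.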
Abstract:
We examined resource limitations on growth and carbon allocation in a fast-growing, clonal plantation of Eucalyptus grandis × urophylla in Brazil by characterizing responses to annual rainfall and to irrigation and fertilization over 2 years. Productivity measures included gross primary production (GPP), total belowground carbon allocation (TBCA), bole growth, and net ecosystem production (NEP). Replicate plots within a single plantation were established at the midpoint of the rotation (end of year 3), with treatments of no additional fertilization or irrigation, heavy fertilization (to remove any nutrient limitation), irrigation (to remove any water limitation), and irrigation plus fertilization. Rainfall was unusually high in the first year (1769 mm) of the experiment, and control plots had high rates of GPP (6.64 kg C m⁻² year⁻¹), TBCA (2.14 kg C m⁻² year⁻¹), and bole growth (1.81 kg C m⁻² year⁻¹). Irrigation increased each of these rates by 15-17%. The second year of the experiment had average rainfall (1210 mm), and the lower rainfall decreased production in control plots by 46% (GPP), 52% (TBCA), and 40% (bole growth). Fertilization treatments had negligible effects. The response to irrigation was much greater in the drier year, with irrigated plots exceeding the production in control plots by 83% (GPP), 239% (TBCA), and 24% (bole growth). Even though the rate of irrigation ensured no water limitation to tree growth, the high-rainfall year showed higher production in irrigated plots for both GPP (38% greater than in the drier year) and bole growth (23% greater). Varying humidity and water supplies led to a range in NEP of 0.8-2.7 kg C m⁻² year⁻¹. The difference between control and irrigated treatments, combined with the differences between drier and wetter years, indicated a strong response of these Eucalyptus trees to both water supply and atmospheric humidity during the dry season.
The efficiency of converting light energy into fixed carbon ranged from 0.027 to 0.060 mol C per mol of absorbed photosynthetically active radiation (APAR), and the efficiency of bolewood production ranged from 0.78 to 1.98 g wood per MJ of APAR. Irrigation increased the efficiency of wood production per unit of water used from 2.55 kg wood m⁻³ in the rainfed plots to 3.51 kg m⁻³ in irrigated plots. Detailed information on the response of carbon budgets to environmental conditions and resource supplies will be necessary for accurate predictions of plantation yields across years and landscapes.
Abstract:
This paper critically assesses several loss allocation methods based on the type of competition each method promotes. This understanding assists in determining which method will promote more efficient network operations when implemented in deregulated electricity industries. The methods addressed in this paper include the pro rata [1], proportional sharing [2], loss formula [3], incremental [4], and a new method proposed by the authors of this paper, which is loop-based [5]. These methods are tested on a modified Nordic 32-bus network, where different case studies of different operating points are investigated. The varying results obtained for each allocation method at different operating points make it possible to distinguish methods that promote unhealthy competition from those that encourage better system operation.
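The contrast the paper tests can be illustrated on made-up figures: pro rata splits total losses in proportion to each agent's power, while an incremental scheme weights agents by marginal loss sensitivities, so the two can rank the same agents in opposite order. The agents, powers, and sensitivities below are hypothetical, not from the Nordic 32-bus study.

```python
total_losses = 6.0                                   # MW
power = {"A": 100.0, "B": 50.0}                      # agent injections, MW
dloss_dp = {"A": 0.02, "B": 0.08}                    # incremental sensitivities

# Pro rata: share in proportion to power alone (location-blind).
pro_rata = {k: total_losses * p / sum(power.values()) for k, p in power.items()}

# Incremental-style: share in proportion to P_i * dLoss/dP_i, scaled to total.
raw = {k: dloss_dp[k] * p for k, p in power.items()}
incremental = {k: total_losses * v / sum(raw.values()) for k, v in raw.items()}

print(pro_rata, incremental)
```

Agent A is charged more under pro rata simply for being larger, while B, whose location contributes more loss per MW, pays more under the incremental scheme, which is why the choice of method shapes the competitive signal agents receive.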