958 results for Cost allocation
Abstract:
This paper analyzes a class of common-component allocation rules, termed no-holdback (NHB) rules, in continuous-review assemble-to-order (ATO) systems with positive lead times. The inventory of each component is replenished following an independent base-stock policy. In contrast to the usually assumed first-come-first-served (FCFS) component allocation rule in the literature, an NHB rule allocates a component to a product demand only if it will yield immediate fulfillment of that demand. We identify metrics as well as cost and product structures under which NHB rules outperform all other component allocation rules. For systems with certain product structures, we obtain key performance expressions and compare them to those under FCFS. For general product structures, we present performance bounds and approximations. Finally, we discuss the applicability of these results to more general ATO systems. © 2010 INFORMS.
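As a rough illustration of the distinction (a toy sketch with hypothetical component names, not the paper's model), FCFS commits on-hand components to demands in arrival order even when a demand cannot yet be fully served, whereas an NHB rule allocates components only when every component the demand needs is available:

```python
# Toy sketch of FCFS vs. no-holdback (NHB) component allocation in an
# assemble-to-order system. Component names and demands are invented.

stock = {"a": 1, "b": 0}           # on-hand inventory per component
backlog = []                        # demands waiting for components

def try_allocate_nhb(demand):
    """Allocate only if the demand can be fulfilled immediately."""
    if all(stock[c] >= n for c, n in demand.items()):
        for c, n in demand.items():
            stock[c] -= n
        return True                 # demand fulfilled now
    backlog.append(demand)          # nothing is committed
    return False

def try_allocate_fcfs(demand):
    """Commit whatever is available, holding it for this demand."""
    committed = {}
    for c, n in demand.items():
        take = min(stock[c], n)
        stock[c] -= take
        committed[c] = take
    if any(committed[c] < demand[c] for c in demand):
        backlog.append((demand, committed))  # partially served; parts held back
        return False
    return True

# A demand needing one unit each of components a and b:
print(try_allocate_nhb({"a": 1, "b": 1}))   # False; component a stays free
```

Under NHB the unit of component a remains available for a product that needs only a, which is exactly the effect that lets NHB rules dominate FCFS on the metrics the paper identifies.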
Abstract:
We develop and apply a valuation methodology to calculate the cost of sustainability capital, and, eventually, sustainable value creation of companies. Sustainable development posits that decisions must take into account all forms of capital rather than just economic capital. We develop a methodology that allows calculation of the costs that are associated with the use of different forms of capital. Our methodology borrows the idea from financial economics that the return on capital has to cover the cost of capital. Capital costs are determined as opportunity costs, that is, the forgone returns that would have been created by alternative investments. We apply and extend the logic of opportunity costs to the valuation not only of economic capital but also of other forms of capital. This allows (a) integrated analysis of use of different forms of capital based on a value-based aggregation of different forms of capital, (b) determination of the opportunity cost of a bundle of different forms of capital used in a company, called cost of sustainability capital, (c) calculation of sustainability efficiency of companies, and (d) calculation of sustainable value creation, that is, the value above the cost of sustainability capital. By expanding the well-established logic of the valuation of economic capital in financial markets to cover other forms of capital, we provide a methodology that allows determination of the most efficient allocation of sustainability capital for sustainable value creation in companies. We demonstrate the practicability of the methodology by the valuation of the sustainability performance of British Petroleum (BP).
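A minimal numeric sketch of the opportunity-cost logic described above, with invented figures (the paper's BP data are not reproduced here): for each form of capital, the forgone benchmark return on the amount of capital the company uses is its opportunity cost, and value is created only above that cost.

```python
# Hypothetical sketch of the cost-of-sustainability-capital calculation.
# All figures are invented; benchmark_return gives the forgone return per
# unit of each capital form, capital_use the company's use of that form.

capital_use = {"economic": 100.0, "co2_emissions": 50.0, "labour": 200.0}
benchmark_return = {"economic": 0.08, "co2_emissions": 0.05, "labour": 0.03}
value_added = 12.0                  # company's return (e.g., gross value added)

# Opportunity cost of each capital form: the forgone benchmark return.
opportunity_costs = {k: benchmark_return[k] * capital_use[k] for k in capital_use}

# Cost of sustainability capital: average over the n capital forms, so the
# bundle of capitals is priced once rather than n times.
n = len(capital_use)
cost_of_sustainability_capital = sum(opportunity_costs.values()) / n

sustainable_value = value_added - cost_of_sustainability_capital
print(f"Cost of sustainability capital: {cost_of_sustainability_capital:.2f}")
print(f"Sustainable value created:      {sustainable_value:.2f}")
```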
Abstract:
Background: Increasing emphasis is being placed on the economics of health care service delivery, including home-based palliative care. Aim: This paper analyzes resource utilization and costs of a shared-care demonstration project in rural Ontario (Canada) from the public health care system's perspective. Design: To provide enhanced end-of-life care, the shared-care approach ensured exchange of expertise and knowledge and coordination of services in line with the understood goals of care. Resource utilization and costs were tracked over the 15-month study period from January 2005 to March 2006. Results: Of the 95 study participants (average age 71 years), 83 had a cancer diagnosis (87%); the non-cancer diagnoses (12 patients, 13%) included mainly advanced heart diseases and COPD. Community Care Access Centre and Enhanced Palliative Care Team-based homemaking and specialized nursing services were the most frequented offerings, followed by equipment/transportation services and palliative care consults for pain and symptom management. Total costs for all patient-related services (in 2007 Canadian dollars) were $1,625,658.07, or $17,112.19 per patient and $117.95 per patient day. Conclusion: While higher than expenditures previously reported for a cancer-only population in an urban Ontario setting, the costs were still within the parameters of the US Medicare Hospice Benefit, on a par with the per diem funding assigned for long-term care homes, and lower than both average alternate-level-of-care and hospital costs within the Province of Ontario. The study results may assist service planners in the appropriate allocation of resources and service packaging to meet the complex needs of palliative care populations. © 2012 The Author(s).
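As a quick consistency check on the reported figures (simple arithmetic on the numbers above, no new data):

```latex
\[
\frac{\$1{,}625{,}658.07}{95\ \text{patients}} \approx \$17{,}112.19\ \text{per patient},
\qquad
\frac{\$17{,}112.19\ \text{per patient}}{\$117.95\ \text{per patient day}}
\approx 145\ \text{days of care per patient}.
\]
```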
Abstract:
Power capping is an essential function for efficient power budgeting and cost management on modern server systems. Contemporary server processors operate under power caps by using dynamic voltage and frequency scaling (DVFS). However, these processors are often deployed in non-uniform memory access (NUMA) architectures, where thread allocation between cores may significantly affect performance and power consumption. This paper proposes a method that maximizes performance under power caps on NUMA systems by dynamically optimizing two knobs: DVFS and thread allocation. The method selects the optimal combination of the two knobs using models based on an artificial neural network (ANN) that capture the nonlinear effect of thread allocation on performance. We implement the proposed method as a runtime system and evaluate it with twelve multithreaded benchmarks on a real AMD Opteron-based NUMA system. The evaluation results show that our method outperforms a naive technique that optimizes only DVFS by up to 67.1% under a power cap.
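A schematic sketch of the selection step the abstract describes, with placeholder functions standing in for the trained ANN models (the paper's actual models, knob ranges, and runtime interfaces are not specified here):

```python
# Sketch: pick the (frequency, thread-allocation) pair with the best
# predicted performance whose predicted power stays under the cap.
# predict_perf / predict_power stand in for the trained ANN models.

from itertools import product

FREQ_LEVELS = [1.4, 1.8, 2.2, 2.6]                    # GHz (hypothetical DVFS states)
THREAD_MAPS = ["pack_node0", "spread", "pack_node1"]  # hypothetical NUMA placements

def predict_perf(freq, tmap):   # placeholder for the ANN performance model
    return freq * (1.2 if tmap == "spread" else 1.0)

def predict_power(freq, tmap):  # placeholder for the ANN power model
    return 40 + 25 * freq + (10 if tmap == "spread" else 0)

def best_config(power_cap):
    feasible = [(f, m) for f, m in product(FREQ_LEVELS, THREAD_MAPS)
                if predict_power(f, m) <= power_cap]
    if not feasible:                                   # fail-safe: lowest power
        return min(product(FREQ_LEVELS, THREAD_MAPS),
                   key=lambda fm: predict_power(*fm))
    return max(feasible, key=lambda fm: predict_perf(*fm))

print(best_config(power_cap=110.0))   # -> (2.2, 'spread') under these toy models
```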
Abstract:
Tolerance allocation is an important step in the design process: it is necessary to produce high-quality components cost-effectively. However, the process of allocating tolerances can be time-consuming and difficult, especially for complex models. This work demonstrates a novel CAD-based approach in which the sensitivities of product dimensions to changes in the values of the feature parameters in the CAD model are computed. These are used to automatically establish the assembly response function for the product. This information is then used to automatically allocate tolerances to individual part dimensions so as to achieve specified tolerances on the assembly dimensions, even when tolerances must be allocated in more than one direction simultaneously. It is also shown how pre-existing constraints on some of the part dimensions can be represented, and how situations can be identified in which the required tolerance allocation is not achievable. A methodology is also presented that uses the same information to model a component with different amounts of dimensional variation to simulate the effects of tolerance stack-up. © 2014 Springer-Verlag France.
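A minimal sketch of the underlying stack-up logic under the common worst-case model (the sensitivities and tolerance values below are invented; the paper's CAD-derived assembly response functions are more general):

```python
# Worst-case tolerance allocation: the assembly tolerance T_assembly must
# cover sum_i |s_i| * t_i, where s_i = d(assembly dim)/d(part dim i) are
# sensitivities computed from the CAD model. Here a set of candidate part
# tolerances is scaled so the stack-up meets the spec exactly.

sensitivities = [1.0, -0.5, 2.0]      # hypothetical CAD-derived sensitivities
initial_tols  = [0.10, 0.10, 0.05]    # candidate part tolerances (mm)
T_assembly    = 0.20                  # required assembly tolerance (mm)

stackup = sum(abs(s) * t for s, t in zip(sensitivities, initial_tols))
scale = T_assembly / stackup          # uniform scaling of all part tolerances
allocated = [t * scale for t in initial_tols]

print(f"stack-up before: {stackup:.3f} mm, scale factor: {scale:.3f}")
print("allocated part tolerances:", [f"{t:.4f}" for t in allocated])
# If some part tolerance t_k is fixed by a pre-existing constraint and the
# remaining budget T_assembly - |s_k|*t_k is non-positive, the required
# allocation is not achievable -- the situation the paper detects.
```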
Abstract:
This study proposes an approach to optimally allocate multiple types of flexible AC transmission system (FACTS) devices in market-based power systems with wind generation. The main objective is to maximise profit by minimising device investment cost and the system's operating cost, considering both normal conditions and possible contingencies. The proposed method accurately evaluates the long-term costs and benefits of FACTS device (FD) installation while solving a large-scale optimisation problem. The objective implies maximising social welfare as well as minimising the compensation paid for generation re-scheduling and load shedding. Many technical operating constraints and uncertainties are included in the problem formulation. The overall problem is solved using particle swarm optimisation to attain the optimal FD allocation as the main problem, with optimal power flow as a sub-optimisation problem. The effectiveness of the proposed approach is demonstrated on the modified IEEE 14-bus and IEEE 118-bus test systems.
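A bare-bones sketch of the two-level structure described above: PSO searches over candidate device placements, and an inner evaluation stands in for the OPF sub-problem (all constants and the cost function are placeholders, not the paper's formulation):

```python
# Skeleton of the bi-level scheme: PSO over FACTS placements/sizes, with
# evaluate() standing in for the OPF sub-problem that returns operating
# cost (generation re-scheduling, load shedding, etc.).
import random

N_BUSES, N_PARTICLES, N_ITER = 14, 20, 50
W, C1, C2 = 0.7, 1.5, 1.5                     # inertia and acceleration terms

def evaluate(x):
    investment = sum(10.0 * xi for xi in x)        # cost grows with device size
    operating = sum((xi - 0.3) ** 2 for xi in x)   # stand-in for the OPF result
    return 0.01 * investment + operating

swarm = [[random.random() for _ in range(N_BUSES)] for _ in range(N_PARTICLES)]
vel = [[0.0] * N_BUSES for _ in range(N_PARTICLES)]
pbest = [p[:] for p in swarm]
gbest = min(swarm, key=evaluate)[:]

for _ in range(N_ITER):
    for i, p in enumerate(swarm):
        for d in range(N_BUSES):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d] + C1 * r1 * (pbest[i][d] - p[d])
                         + C2 * r2 * (gbest[d] - p[d]))
            p[d] = min(1.0, max(0.0, p[d] + vel[i][d]))   # keep sizes in [0, 1]
        if evaluate(p) < evaluate(pbest[i]):
            pbest[i] = p[:]
            if evaluate(p) < evaluate(gbest):
                gbest = p[:]

print("best placement vector:", [round(x, 2) for x in gbest])
```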
Abstract:
Over the last decade, wireless communication networks have grown exponentially, both in the penetration rate of the service provided and in the deployment of new infrastructure across the globe. It is now well established that this trend will not only continue but strengthen, driven by the expected convergence between mobile wireless networks and the provision of broadband services over the fixed Internet, in an evolution towards an integrated architecture based on IP services and applications. For this reason, mobile wireless communications will play a fundamental role in the development of the information society in the medium and long term. The design and implementation of current-generation (2G and 3G) cellular mobile networks followed a strategy of stratifying the protocol architecture into a modular structure of watertight layers, in which each layer of the model is responsible for implementing a set of functions. In this model, communication takes place only between adjacent layers, through pre-established communication primitives. This architectural model makes it easier to implement and introduce new functionality into the network. However, the fact that the lower layers of the protocol stack do not use information made available by the upper layers, and vice versa, degrades system performance. This issue is particularly important when multiple-antenna (MIMO) systems are deployed. Multiple-antenna systems introduce an additional degree of freedom in radio resource allocation: the spatial domain. In contrast to resource allocation in the time and frequency domains, radio resources mapped onto the spatial domain cannot be assumed to be completely orthogonal, owing to the interference that results from several terminals transmitting on the same channel and/or time slots but on different spatial beams. The availability of information on the state of the radio resources to the upper layers of the protocol stack is therefore of fundamental importance for meeting the required quality-of-service criteria. Efficient radio resource management requires low-complexity packet scheduling algorithms that set users' priority levels for access to those resources based on information provided by both the lower and the upper layers of the model. This new communication paradigm, known as cross-layer design, maximizes the data-carrying capacity of the mobile radio channel while satisfying the quality-of-service requirements derived from the application layer of the model. The IEEE 802.16e standard, known as Mobile WiMAX, was designed to comply with the specifications associated with fourth-generation cellular mobile systems. Its scalable architecture, low deployment cost, and high data transmission rates yield efficient data multiplexing and low packet transmission delay, fundamental attributes for the provision of broadband services. Likewise, the packet-switched communication inherent in its medium access layer is fully compatible with the quality-of-service demands of such applications.
Mobile WiMAX therefore appears to satisfy the demanding requirements of fourth-generation mobile networks. This thesis investigates, designs, and implements packet scheduling algorithms for the efficient management of the radio resources of cellular mobile networks in the time, frequency, and spatial domains, taking networks based on the IEEE 802.16e standard as the practical case. The proposed algorithms combine metrics from the physical layer with the quality-of-service requirements of the upper layers, in accordance with the cross-layer network architecture paradigm. The performance of these algorithms is analysed through simulations carried out with a system-level simulator, on a platform that implements the physical and medium access layers of the IEEE 802.16e standard.
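As an illustration of the cross-layer idea (not the thesis's actual algorithms), a scheduler can weight a physical-layer metric such as the instantaneous achievable rate by an upper-layer QoS term such as head-of-line packet delay; the users and numbers below are invented:

```python
# Illustrative cross-layer scheduling metric: combine a PHY-layer quantity
# (achievable rate on the current slot/beam) with a QoS quantity from the
# upper layers (head-of-line delay against the flow's delay budget).

users = [
    # (name, achievable_rate_bps, avg_rate_bps, hol_delay_ms, delay_budget_ms)
    ("voip_user", 0.5e6, 0.4e6, 18.0, 20.0),
    ("video_user", 2.0e6, 1.5e6, 40.0, 100.0),
    ("web_user", 4.0e6, 3.0e6, 5.0, 500.0),
]

def priority(rate, avg_rate, delay, budget):
    proportional_fair = rate / avg_rate   # PHY-layer opportunism
    urgency = delay / budget              # upper-layer QoS pressure
    return proportional_fair * (1.0 + urgency)

scheduled = max(users, key=lambda u: priority(*u[1:]))
print("slot granted to:", scheduled[0])  # the near-deadline VoIP flow wins
```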
Abstract:
The hub location problem is an NP-hard problem that frequently arises in the design of transportation and distribution systems, postal delivery networks, and airline passenger flow. This work focuses on the Single Allocation Hub Location Problem (SAHLP). Genetic Algorithms (GAs) for the capacitated and uncapacitated variants of the SAHLP, based on new chromosome representations and crossover operators, are explored. The GAs are tested on two well-known sets of real-world problems with up to 200 nodes. The obtained results are very promising: for most of the test problems the GA obtains improved or best-known solutions while the computational time remains low. The proposed GAs can easily be extended to other variants of location problems arising in network design and planning in transportation systems.
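A minimal sketch of one natural chromosome representation for the SAHLP (an integer array assigning each node to a hub) with a simple one-point crossover; the paper's actual representations, operators, and cost data are not reproduced:

```python
import random

# Sketch: chromosome for single-allocation hub location = one hub index per
# node; node i is allocated to hub chromosome[i]. Distances are invented.

N_NODES, HUBS = 6, [0, 3]            # candidate hub nodes (hypothetical)
random.seed(1)
dist = [[abs(i - j) for j in range(N_NODES)] for i in range(N_NODES)]
ALPHA = 0.7                          # discounted inter-hub transfer factor

def cost(chrom):
    # Collection + discounted hub-to-hub transfer + distribution, all pairs.
    total = 0.0
    for i in range(N_NODES):
        for j in range(N_NODES):
            hi, hj = chrom[i], chrom[j]
            total += dist[i][hi] + ALPHA * dist[hi][hj] + dist[hj][j]
    return total

def crossover(a, b):
    cut = random.randrange(1, N_NODES)
    return a[:cut] + b[cut:]         # one-point crossover

pop = [[random.choice(HUBS) for _ in range(N_NODES)] for _ in range(20)]
for _ in range(100):
    pop.sort(key=cost)
    child = crossover(pop[0], pop[1])            # breed the two best
    child[random.randrange(N_NODES)] = random.choice(HUBS)   # mutation
    pop[-1] = child                               # replace the worst

print("best allocation:", min(pop, key=cost))
```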
Abstract:
We study fairness in economies with one private good and one partially excludable nonrival good. A social ordering function determines for each profile of preferences an ordering of all conceivable allocations. We propose the following Free Lunch Aversion condition: if the private good contributions of two agents consuming the same quantity of the nonrival good have opposite signs, reducing that gap improves social welfare. This condition, combined with the more standard requirements of Unanimous Indifference and Responsiveness, delivers a form of welfare egalitarianism in which an agent's welfare at an allocation is measured by the quantity of the nonrival good that, consumed at no cost, would leave her indifferent to the bundle she is assigned.
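In symbols (our paraphrase, with notation that is not the paper's: $t_i$ for agent $i$'s private-good contribution and $y_i$ for her consumption of the nonrival good), the condition reads roughly as follows.

```latex
% Paraphrase of Free Lunch Aversion: if allocations z and z' are identical
% except for the contributions of agents i and j, then
\[
y_i = y_j, \quad t_i < 0 < t_j, \quad
t_i < t'_i \le t'_j < t_j, \quad t'_i + t'_j = t_i + t_j
\;\Longrightarrow\; z' \text{ is socially preferred to } z.
\]
```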
Abstract:
We survey recent axiomatic results in the theory of cost-sharing. In this literature, a method computes the individual cost shares assigned to the users of a facility for any profile of demands and any monotonic cost function. We discuss two theories taking radically different views of the asymmetries of the cost function. In the full responsibility theory, each agent is accountable for the part of the costs that can be unambiguously separated and attributed to her own demand. In the partial responsibility theory, the asymmetries of the cost function have no bearing on individual cost shares; only the differences in demand levels matter. We describe several invariance and monotonicity properties that reflect both normative and strategic concerns. We uncover a number of logical trade-offs between our axioms and derive axiomatic characterizations of a handful of intuitive methods: in the full responsibility approach, the Shapley-Shubik, Aumann-Shapley, and subsidy-free serial methods, and in the partial responsibility approach, the cross-subsidizing serial method and the family of quasi-proportional methods.
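For concreteness, one of the characterized methods has a compact closed form in its classical single-output setting (the Moulin-Shenker serial method; the survey's setting is more general): with demands ordered $q_1 \le \dots \le q_n$ and cost function $C$, agent $i$ pays

```latex
\[
x_i \;=\; \sum_{k=1}^{i} \frac{C(s^k) - C(s^{k-1})}{n-k+1},
\qquad s^k = q_1 + \cdots + q_k + (n-k)\,q_k, \quad s^0 = 0,
\]
```

so that an agent's share is unaffected by demands larger than her own.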
Abstract:
We study the problem of provision and cost-sharing of a public good in large economies where exclusion, complete or partial, is possible. We search for incentive-constrained efficient allocation rules that display fairness properties. Population monotonicity says that an increase in population should not be detrimental to anyone. Demand monotonicity states that an increase in the demand for the public good (in the sense of a first-order stochastic dominance shift in the distribution of preferences) should not be detrimental to any agent whose preferences remain unchanged. Under suitable domain restrictions, there exists a unique incentive-constrained efficient and demand-monotonic allocation rule: the so-called serial rule. In the binary public good case, the serial rule is also the only incentive-constrained efficient and population-monotonic rule.
Abstract:
Objective: To assess the care burden of the clinical management of people living with HIV/AIDS (PLWHA) in order to adjust resource allocation to GMFs (family medicine groups). Methods: Comparative analysis between the GMF of the Clinique médicale l'Actuel, the Montreal GMFs, and GMFs across Quebec, identifying differences in care consumption profiles for calendar years 2006 to 2008 and in service utilization costs for 2005. Results: In 2008, 78% of the patients registered with the GMF of the Clinique médicale l'Actuel were classified as vulnerable, compared with 28% for the other Montreal GMFs, a trend also observed across Quebec. The mean number of visits per registered vulnerable patient was 7.57 at the GMF l'Actuel, versus a Montreal average of 3.37 and a Quebec average of 3.47. Finally, the mean cost of medical visits at the GMF l'Actuel in 2005 was $203.93, compared with costs ranging from $132.14 to $149.53 for the comparison units. Conclusion: The intensity of resource use at the GMF of the Clinique médicale l'Actuel (number of vulnerable patients, number of visits, and costs) suggests that the clinical management of people living with HIV/AIDS is far more demanding than that of an average patient, or even of most other vulnerability categories. To treat GMFs fairly and equitably, patient registration should be adjusted to reflect the care burden of this clientele and to reward the management of patients with complex clinical profiles.
Abstract:
This paper focuses on one of the methods for bandwidth allocation in an ATM network: the convolution approach. The convolution approach permits an accurate study of the system load in statistical terms by accumulated calculations, since probabilistic results of the bandwidth allocation can be obtained. Nevertheless, the convolution approach has a high cost in terms of calculation and storage requirements. This aspect makes real-time calculations difficult, so many authors do not consider this approach. With the aim of reducing this cost, we propose to use the multinomial distribution function: the enhanced convolution approach (ECA). This permits direct computation of the associated probabilities of the instantaneous bandwidth requirements and makes a simple deconvolution process possible. The ECA is used in connection acceptance control, and some results are presented.
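A small sketch of the convolution idea for on-off sources (invented parameters; the ECA's multinomial refinement groups identical sources to avoid the full source-by-source convolution):

```python
# Convolution approach sketch: distribution of the aggregate instantaneous
# bandwidth demand of independent on-off sources. Each source is active
# with probability p and then requires b bandwidth units.
from collections import defaultdict
from math import comb

sources = [(0.3, 2), (0.3, 2), (0.3, 2), (0.1, 5)]   # (p_active, bandwidth)

dist = {0: 1.0}                          # start: zero demand w.p. 1
for p, b in sources:                     # convolve sources one at a time
    new = defaultdict(float)
    for demand, prob in dist.items():
        new[demand] += prob * (1 - p)    # source idle
        new[demand + b] += prob * p      # source active
    dist = dict(new)

# Grouping shortcut for the three identical (0.3, 2) sources: binomial
# counts (a special case of the multinomial grouping) replace 3 convolutions.
eca_group = {2 * k: comb(3, k) * 0.3**k * 0.7**(3 - k) for k in range(4)}

print("P(aggregate demand > 4 units) =",
      sum(prob for d, prob in dist.items() if d > 4))
print("grouped distribution of the identical sources:", eca_group)
```

The probabilities computed this way feed directly into connection acceptance control: a new connection is admitted only if the resulting tail probability of exceeding the link capacity stays below the target.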
Abstract:
Task coordination and allocation in distributed environments has been an important research topic in recent years, and these issues are at the heart of multi-agent systems. Agents in such systems need to cooperate and to consider other agents in their actions and decisions. Moreover, agents must coordinate among themselves to accomplish complex tasks that require more than one agent to be completed. These tasks may be so complex that agents may not know the location of the tasks or the time remaining before the tasks become obsolete. Agents may need to communicate in order to learn about the tasks in the environment; otherwise, they may waste a great deal of time searching for them in the scenario. Similarly, distributed decision-making becomes even more complex when the environment is dynamic, uncertain, and real-time. In this dissertation we consider constrained, cooperative multi-agent environments (dynamic, uncertain, and real-time) and propose two approaches that enable agent coordination. The first is a semi-centralised mechanism based on combinatorial auction techniques, whose main idea is to minimise the cost of the tasks allocated by a central agent to teams of agents. This algorithm takes into account the agents' preferences over the tasks, which are encoded in the bids the agents submit. The second is a fully decentralised scheduling approach, which lets agents allocate their tasks taking into account their temporal preferences over the tasks. In this case, system performance depends not only on the maximisation or optimisation criterion but also on the agents' ability to adapt their allocations efficiently. Additionally, in a dynamic environment, execution failures can occur in any plan owing to uncertainty and the failure of individual actions, so an indispensable part of any planning system is the ability to replan. This dissertation therefore also provides a replanning approach that allows agents to re-coordinate their plans when problems in the environment prevent plan execution. All these approaches were developed to let agents allocate and coordinate all complex tasks efficiently in a cooperative, dynamic, and uncertain multi-agent environment, and they have demonstrated their efficiency in experiments carried out in the RoboCup Rescue simulation environment.
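A toy sketch of the semi-centralised step (greedy winner determination over task bundles, with invented bids; the dissertation's auction mechanism and preference encoding are richer):

```python
# Semi-centralised allocation sketch: a central agent receives bids from
# agent teams for bundles of tasks and greedily picks compatible bids of
# lowest cost per task. Bids and task names are invented for illustration.

bids = [
    # (team, bundle of tasks, cost reflecting the team's preferences)
    ("team_a", {"extinguish_f1", "clear_road_r2"}, 7.0),
    ("team_b", {"extinguish_f1"}, 5.0),
    ("team_c", {"clear_road_r2"}, 3.0),
    ("team_b", {"rescue_civ_c3"}, 4.0),
]

unassigned = {"extinguish_f1", "clear_road_r2", "rescue_civ_c3"}
allocation = []

# Greedy by cost per task: cheap, fast, and anytime -- suited to real-time
# settings, though not optimal like exact winner determination.
for team, bundle, cost in sorted(bids, key=lambda b: b[2] / len(b[1])):
    if bundle <= unassigned:             # all tasks in the bundle still free
        allocation.append((team, bundle, cost))
        unassigned -= bundle

print(allocation, "left unassigned:", unassigned)
```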
Abstract:
Background: Medication errors are common in primary care and are associated with considerable risk of patient harm. We tested whether a pharmacist-led, information technology-based intervention was more effective than simple feedback in reducing the number of patients at risk of measures related to hazardous prescribing and inadequate blood-test monitoring of medicines 6 months after the intervention. Methods: In this pragmatic, cluster randomised trial, general practices in the UK were stratified by research site and list size, and randomly assigned by a web-based randomisation service in block sizes of two or four to one of two groups. The practices were allocated to either computer-generated simple feedback for at-risk patients (control) or a pharmacist-led information technology intervention (PINCER), composed of feedback, educational outreach, and dedicated support. The allocation was masked to general practices, patients, pharmacists, researchers, and statisticians. Primary outcomes were the proportions of patients at 6 months after the intervention who had had any of three clinically important errors: non-selective non-steroidal anti-inflammatory drugs (NSAIDs) prescribed to those with a history of peptic ulcer without co-prescription of a proton-pump inhibitor; β blockers prescribed to those with a history of asthma; long-term prescription of an angiotensin converting enzyme (ACE) inhibitor or loop diuretics to those 75 years or older without assessment of urea and electrolytes in the preceding 15 months. The cost per error avoided was estimated by incremental cost-effectiveness analysis. This study is registered with Controlled-Trials.com, number ISRCTN21785299. Findings: 72 general practices with a combined list size of 480 942 patients were randomised. At 6 months' follow-up, patients in the PINCER group were significantly less likely to have been prescribed a non-selective NSAID if they had a history of peptic ulcer without gastroprotection (OR 0.58, 95% CI 0.38–0.89); a β blocker if they had asthma (0.73, 0.58–0.91); or an ACE inhibitor or loop diuretic without appropriate monitoring (0.51, 0.34–0.78). PINCER has a 95% probability of being cost-effective if the decision-maker's ceiling willingness to pay reaches £75 per error avoided at 6 months. Interpretation: The PINCER intervention is an effective method for reducing a range of medication errors in general practices with computerised clinical records. Funding: Patient Safety Research Portfolio, Department of Health, England.
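Schematically, the cost-per-error-avoided figure follows the usual incremental calculation (a generic formula; no trial data beyond those reported are assumed):

```latex
\[
\text{cost per error avoided} \;=\;
\frac{C_{\text{PINCER}} - C_{\text{feedback}}}
     {E_{\text{feedback}} - E_{\text{PINCER}}},
\]
```

where $C$ denotes the cost of delivering each arm and $E$ the number of patients with at least one of the three errors at 6 months; the intervention is cost-effective when the decision-maker's willingness to pay per error avoided exceeds this ratio.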