926 results for transmission cost allocation
Abstract:
Choosing between Light Rail Transit (LRT) and Bus Rapid Transit (BRT) systems is often controversial and not an easy task for transportation planners contemplating an upgrade of their public transportation services. The two systems provide comparable service for medium-sized cities from suburban neighborhoods to the Central Business District (CBD) and utilize similar right-of-way (ROW) categories. This research aims to develop a method to assist transportation planners and decision makers in determining the more feasible of the two systems. Cost estimation is a major factor when evaluating a transit system. Typically, LRT is more expensive to build and implement than BRT but has significantly lower Operating and Maintenance (O&M) costs. This dissertation examines the factors affecting capacity and costs and develops capacity-based cost models for the LRT and BRT systems. Various ROW categories and alignment configurations are also considered in the models, which build on Kikuchi's fleet size model (1985) and a cost allocation method to estimate capacity and costs. Comparing LRT and BRT is complicated by the many possible transportation planning and operating scenarios. A user-friendly computer interface integrating the capacity-based cost models, the LRT and BRT Cost Estimator (LBCostor), was therefore developed in Microsoft Visual Basic to facilitate the process and guide users through the comparison. The cost models and LBCostor can be used to analyze transit volumes, alignments, ROW configurations, numbers of stops and stations, headways, vehicle sizes, and traffic signal timing at intersections; planners can make adjustments to match their operating practices.
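The fleet-size relation at the heart of Kikuchi's (1985) model can be sketched in a few lines. This is illustrative only: the headway, vehicle lives and cost figures are hypothetical, and the dissertation's actual models also account for ROW categories, alignments and signal timing.

```python
from math import ceil

def fleet_size(cycle_time_min, headway_min):
    """Vehicles needed to sustain a headway over one round trip:
    the classic relation N = ceil(cycle time / headway)."""
    return ceil(cycle_time_min / headway_min)

def annual_cost(fleet, capital_per_vehicle, life_years, om_per_vehicle_year):
    """Straight-line annualised capital plus yearly O&M for the fleet."""
    return fleet * (capital_per_vehicle / life_years + om_per_vehicle_year)

# Hypothetical corridor: 60-minute round trip at a 6-minute headway.
n = fleet_size(60, 6)                            # 10 vehicles
lrt = annual_cost(n, 4_000_000, 30, 250_000)     # pricier to build, cheaper to run
brt = annual_cost(n, 900_000, 12, 400_000)       # cheaper to build, pricier to run
```

With these made-up numbers the LRT fleet's annualised cost comes out below the BRT fleet's, illustrating how the O&M advantage can offset higher capital over a long asset life.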
Abstract:
The concession agreement is the core feature of BOT projects, with the concession period being the most essential element in determining the time span of the various rights, obligations and responsibilities of the government and the concessionaire. Concession period design is therefore crucial for financial viability and for determining the benefit/cost allocation between the host government and the concessionaire. However, while the concession period and project life span are essentially interdependent, most methods to date treat their determination as contiguous events decided exogenously. Moreover, these methods seldom consider the often uncertain social benefits and costs that are critical in defining, pricing and distributing benefits and costs between the various parties and in evaluating potentially distributable cash flows. In this paper, we present the results of the first stage of a research project aimed at determining the optimal build-operate-transfer (BOT) project life span and concession period endogenously and interdependently by maximizing the combined benefits of stakeholders. Based on an estimation of the economic and social development involved, a negotiation space for the concession period is obtained, with its lower boundary securing the desired financial return for the private investors and its upper boundary ensuring economic feasibility for the host government as well as maximized welfare within the project life. The new quantitative model is considered a suitable basis for future field trials prior to implementation. The structure and details of the model are provided in the paper, with a Hong Kong tunnel project as a case study to demonstrate its detailed application.
The paper's basic contributions to the theory of construction procurement are that the project life span and concession period are determined jointly and that social benefits are taken into account in the examination of project financial benefits. In practical terms, the model goes beyond the current practice of linear-process thinking and should enable engineering consultants to provide project information more rationally and accurately to BOT project bidders, increasing the government's prospects of successfully entering into a contract with a concessionaire. This is expected to generate more negotiation space for the government and concessionaire in determining the major socioeconomic features of individual BOT contracts when negotiating the concession period. As a result, use of the model should increase the total benefit to both parties.
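The negotiation space described above can be illustrated with a simple NPV search. This is a minimal sketch under strong assumptions (constant annual flows, a single discount rate, and a hypothetical `gov_floor` parameter standing in for the government's required discounted benefit), not the paper's model:

```python
def npv(cashflows, rate):
    """Discounted sum of year-indexed cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def concession_interval(capex, revenue, social_benefit, life, rate, gov_floor):
    """Negotiation space [lower, upper] for the concession period:
    lower = shortest concession giving the investor a nonnegative NPV;
    upper = longest concession whose post-transfer years still leave the
    government at least gov_floor in discounted benefit."""
    years = range(1, life + 1)
    lower = next((n for n in years
                  if npv([-capex] + [revenue] * n, rate) >= 0), None)
    upper = max((n for n in years
                 if npv([0] * (n + 1)
                        + [revenue + social_benefit] * (life - n), rate)
                 >= gov_floor),
                default=None)
    return lower, upper

# Hypothetical tunnel: capex 100, the operator nets 15/yr, an extra 5/yr
# of social benefit accrues after transfer, 30-year life, 5% discount
# rate, and the government requires 50 units of discounted benefit.
lo, hi = concession_interval(100.0, 15.0, 5.0, 30, 0.05, 50.0)
```

Any concession length inside `[lo, hi]` satisfies both parties' floors, which is precisely the interval available for negotiation.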
Abstract:
With the liberalisation of the electricity market, it has become very important to determine which participants make use of the transmission network. Computing transmission line usage requires information about generator-to-load contributions and the paths used by the various generators to meet loads and losses. In this study, the relative electrical distance (RED) concept is used to compute reactive power contributions from various sources (generators, switchable volt-ampere reactive (VAR) sources, and line-charging susceptances scattered throughout the network) to meet the system demands. The contribution of transmission line charging susceptances to the system's reactive flows, and the help they provide in reducing reactive generation at the generator buses, are discussed. Reactive power transmission cost evaluation is also carried out. The proposed approach is compared with other approaches, viz. proportional sharing and the modified Y-bus. Detailed case studies with base-case and optimised results are carried out on a sample 8-bus system; an IEEE 39-bus system and a practical 72-bus system, an equivalent of the Indian Southern grid, are also considered for illustration, and results are discussed.
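The proportional-sharing method that the study compares against can be sketched for a tiny lossless, acyclic flow pattern: at every bus, each unit of power leaving (to a load or a line) carries the bus's inflows in proportion to their size. The bus numbers and flows below are made up for illustration:

```python
def trace_contributions(injections, line_flows, loads):
    """Proportional-sharing trace on a lossless, acyclic flow pattern.
    injections: {bus: MW injected by local generation}
    line_flows: {(from_bus, to_bus): MW flowing in that direction}
    loads:      {bus: MW consumed locally}
    Returns {load_bus: {source_bus: MW supplied by that source}}."""
    buses = set(injections) | set(loads) | {b for line in line_flows for b in line}
    mix, resolved = {}, set()      # mix[b]: fractional source shares at bus b
    while len(resolved) < len(buses):
        for b in buses - resolved:
            inflows = [(f, mw) for (f, t), mw in line_flows.items() if t == b]
            if any(f not in resolved for f, _ in inflows):
                continue           # resolve upstream buses first (acyclic)
            total = injections.get(b, 0.0) + sum(mw for _, mw in inflows)
            shares = {b: injections.get(b, 0.0)}
            for f, mw in inflows:
                for src, frac in mix[f].items():
                    shares[src] = shares.get(src, 0.0) + mw * frac
            mix[b] = {s: v / total for s, v in shares.items() if total}
            resolved.add(b)
    return {b: {s: frac * loads[b] for s, frac in mix[b].items()} for b in loads}

# Toy system: G1 injects 100 MW at bus 1, G2 injects 50 MW at bus 2,
# the 1->2 line carries 40 MW, loads are 60 MW (bus 1) and 90 MW (bus 2).
supply = trace_contributions({1: 100.0, 2: 50.0}, {(1, 2): 40.0},
                             {1: 60.0, 2: 90.0})
```

Here the 90 MW load at bus 2 is attributed 40 MW to the bus-1 generator and 50 MW to the local one; the same bookkeeping works for reactive power once line-charging injections are modelled as sources.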
Abstract:
Recent years have witnessed a rapid growth in the demand for streaming video over the Internet, exposing challenges in coping with heterogeneous device capabilities and varying network throughput. When we couple this rise in streaming with the growing number of portable devices (smart phones, tablets, laptops) we see an ever-increasing demand for high-definition videos online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide us with graceful changes in video quality, all while respecting our viewing satisfaction. In this context the use of well-known scalable media streaming techniques, commonly known as scalable coding, is an attractive solution and the focus of this thesis. In this thesis we investigate the transmission of existing scalable video models over a lossy network and determine how the variation in viewable quality is affected by packet loss. This work focuses on leveraging the benefits of scalable media, while reducing the effects of data loss on achievable video quality. The overall approach is focused on the strategic packetisation of the underlying scalable video and how to best utilise error resiliency to maximise viewable quality. In particular, we examine the manner in which scalable video is packetised for transmission over lossy networks and propose new techniques that reduce the impact of packet loss on scalable video by selectively choosing how to packetise the data and which data to transmit. 
We also exploit redundancy techniques, such as error resiliency, to enhance stream quality by ensuring smooth play-out with fewer changes in achievable video quality. The contributions of this thesis are new segmentation and encapsulation techniques that increase the viewable quality of existing scalable models by fragmenting and re-allocating the video sub-streams based on user requirements, available bandwidth and variations in loss rates. We offer new packetisation techniques that reduce the effects of packet loss on viewable quality by leveraging an increased number of frames per group of pictures (GOP) and by providing equality of data in every packet transmitted per GOP. These provide novel mechanisms for packetisation and error resiliency, as well as new applications for existing techniques such as Interleaving and Priority Encoded Transmission. We also introduce three new scalable coding models, which offer a balance between transmission cost and consistency of viewable quality.
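The interleaving idea behind such packetisation can be shown in miniature. This sketch assumes one GOP of equal-importance frames and plain round-robin striping; the thesis's techniques additionally weight sub-streams by importance:

```python
def interleave_packetise(gop_frames, num_packets):
    """Round-robin (interleaved) packetisation of one GOP: frame i goes
    into packet i % num_packets, so a lost packet removes scattered
    frames rather than a contiguous burst."""
    packets = [[] for _ in range(num_packets)]
    for i, frame in enumerate(gop_frames):
        packets[i % num_packets].append(frame)
    return packets

def surviving_frames(packets, lost):
    """Frames still available after the packet indices in `lost` drop."""
    return sorted(f for i, p in enumerate(packets) if i not in lost for f in p)

# A 12-frame GOP striped over 4 packets: losing packet 0 leaves 9
# evenly spaced frames instead of a 3-frame hole in the play-out.
packets = interleave_packetise(list(range(12)), 4)
left = surviving_frames(packets, lost={0})
```

Spreading each loss across the GOP is what lets a decoder conceal errors gracefully instead of stalling on a burst of consecutive missing frames.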
Abstract:
Recent years have witnessed rapid growth in the demand for streaming video over the Internet and mobile networks, exposing challenges in coping with heterogeneous devices and varying network throughput. Adaptive schemes, such as scalable video coding, are an attractive solution but fare badly in the presence of packet losses. Techniques that use description-based streaming models, such as multiple description coding (MDC), are more suitable for lossy networks and can mitigate the effects of packet loss by increasing the error resilience of the encoded stream, but at an increased transmission byte cost. In this paper, we present our adaptive scalable streaming technique, adaptive layer distribution (ALD). ALD is a novel scalable media delivery technique that optimises the tradeoff between streaming bandwidth and error resiliency. ALD is based on the principle of layer distribution, in which the critical stream data are spread amongst all packets, thus lessening the impact of network losses on quality. Additionally, ALD provides a parameterised mechanism for dynamically adapting the resiliency of the scalable video. Subjective testing results illustrate that our techniques and models provide consistently high-quality viewing, at lower transmission cost relative to MDC, irrespective of clip type. This highlights the benefits of selective packetisation in addition to intuitive encoding and transmission.
Abstract:
The use of distribution networks in the current scenario of high penetration of Distributed Generation (DG) is a problem of great importance. In the competitive environment of electricity markets and smart grids, Demand Response (DR) is also gaining notable impact, with several benefits for the whole system. The work presented in this paper comprises a methodology able to define cost allocation in distribution networks considering large-scale integration of DG and DR resources. The proposed methodology is divided into three phases and is based on an AC Optimal Power Flow (OPF), including the determination of topological distribution factors and the consequent application of the MW-mile method. The application of the proposed tariff-definition methodology is illustrated in a distribution network with 33 buses, 66 DG units, and 32 consumers with DR capacity.
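The MW-mile step of such a methodology can be sketched as follows. The line names, costs and usage figures are hypothetical; in the paper, each user's per-line usage would come from the topological distribution factors of the AC OPF solution:

```python
def mw_mile_charges(line_costs, usage):
    """MW-mile allocation: each line's annual cost is split among network
    users in proportion to the MW-mile use they impose on it.
    line_costs: {line: annual cost}
    usage:      {line: {user: MW-mile use of that line, e.g. derived
                        from topological distribution factors}}"""
    charges = {}
    for line, cost in line_costs.items():
        total = sum(abs(u) for u in usage[line].values())
        for user, mw_mile in usage[line].items():
            share = cost * abs(mw_mile) / total if total else 0.0
            charges[user] = charges.get(user, 0.0) + share
    return charges

# Hypothetical two-line network with two users, A and B:
bill = mw_mile_charges(
    {'L1': 100.0, 'L2': 60.0},
    {'L1': {'A': 30.0, 'B': 70.0}, 'L2': {'A': 1.0, 'B': 2.0}})
```

The absolute value keeps counter-flow users from earning negative charges, one of several MW-mile variants; which variant to use is a tariff-design choice.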
Abstract:
ABSTRACT - Introduction: The absence of a cost accounting plan for Primary Health Care is an obstacle to internal accounting, which is fundamental to the management of any health institution. Without guidelines to standardise the criteria for allocating and distributing costs and revenues, it is difficult to obtain the analytical data needed for more effective management control, which would allow resources to be used efficiently and rationally and improve the quality of care provided to patients. Objective: The main objective of this research project is to determine the cost per patient in Primary Health Care. Methodology: A costing methodology was built on the Time-Driven Activity-Based Costing method. Cost was allocated to each patient using the following cost drivers: consultation time and output delivered, for allocating medical staff costs; output delivered, for allocating other staff costs and variable indirect costs; and total number of registered patients, for allocating fixed indirect costs. Results: The total cost determined was €2,980,745.10. The average number of consultations is 3.17 per registered patient and 4.72 per active patient. The average cost per patient is €195.76: €232.41 for female patients and €154.80 for male patients. The items with the greatest weight in the total cost per patient are medicines (40.32%), medical staff costs (22.87%) and complementary diagnostic and therapeutic services (MCDT, 17.18%). Conclusion: When implementing a per-patient costing system, it is crucial to have efficient information systems that record the care delivered to the patient across the various levels of care. It is also important that management does not use the resulting figures merely as a cost-control tool; their use should be leveraged to create value for the patient.
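The core TDABC calculation can be illustrated with hypothetical figures (the study also uses production-based drivers, which are folded into the overhead shares here):

```python
def capacity_cost_rate(total_resource_cost, practical_capacity_min):
    """TDABC's first parameter: cost per minute of capacity supplied."""
    return total_resource_cost / practical_capacity_min

def patient_cost(consult_minutes, rate_per_min, variable_overhead, fixed_overhead):
    """Cost attributed to one patient: clinician time valued at the
    capacity cost rate, plus the patient's allocated overhead shares."""
    return consult_minutes * rate_per_min + variable_overhead + fixed_overhead

# Hypothetical figures: 600,000 EUR of physician cost over 480,000
# practical minutes gives 1.25 EUR/min; a 20-minute consultation plus
# assumed overhead shares of 110 and 60 EUR.
rate = capacity_cost_rate(600_000.0, 480_000.0)
cost = patient_cost(20.0, rate, 110.0, 60.0)
```

The appeal of TDABC is that only two estimates per resource pool are needed (cost and practical capacity), with patient-level time records supplying the rest.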
Abstract:
ABSTRACT - Introduction: Vertical integration of care emerged in Portugal in 1999 with the creation of the first Local Health Unit (ULS) in Matosinhos. The main objective of this management model is to reorganise the system to respond more cost-effectively to current needs. Objective: To analyse the impact of creating ULS on the costs of Portuguese hospital inpatient care. Methodology: The average estimated cost per inpatient episode was calculated using the Estimated Costs methodology based on cost accounting. However, costs were not allocated per inpatient day by production centre, only per patient discharged from a given hospital. Demographic and production variables were considered in order to compare the organisational management models. Results: Overall, hospitals integrated into ULS show a lower average estimated cost per inpatient episode than the others. In 2004, hospitals without a vertically integrated care model showed a cost difference of approximately €714.00. In 2009, the last year analysed, this difference narrows to €232.00 relative to ULS-integrated hospitals. Discussion and Conclusion: There is no clear trend in the cost difference between the organisational models. Future studies should extend the sample to all providers and examine in more depth the factors that influence inpatient costs. Understanding sociodemographic indicators, average length of stay and output delivered, from a cost-effectiveness and quality perspective, will yield results with a lower degree of bias.
Abstract:
On-chip multiprocessor (OCM) systems are regarded as the best structures for occupying the space available on today's integrated circuits. In this work, we focus on an architectural model, called the isometric on-chip multiprocessor architecture, which makes it possible to evaluate, predict and optimise OCM systems through an efficient organisation of the nodes (processors and memories), and on methodologies for using these architectures effectively. In the first part of the thesis, we address the topology of the model and propose an architecture that makes efficient, large-scale use of on-chip memories. Processors and memories are organised according to an isometric approach, which brings data closer to processes rather than optimising transfers between conventionally arranged processors and memories. The architecture is a three-dimensional mesh. The placement of units on this model is inspired by the crystal structure of sodium chloride (NaCl): each processor can access six memories at once, and each memory can communicate with as many processors at once. In the second part of our work, we address a decomposition methodology in which the ideal number of nodes of the model can be determined from a matrix specification of the application to be run on it. Since the performance of a model depends on the amount of data flow exchanged between its units, and hence on their number, and since our goal is to guarantee good computational performance for the application at hand, we propose to find the ideal number of processors and memories for the system to be built.
We also consider decomposing the specification of the model to be built, or of the application to be run, according to the load balance of the units. We thus propose a three-part decomposition approach: transforming the specification or application into an incidence matrix whose elements are the data flows between processes and data; a new methodology based on the Cell Formation Problem (CFP); and load balancing of processes across processors and of data across memories. In the third part, still aiming for an efficient, high-performance system, we address the assignment of processors and memories through a two-step methodology. First, we assign units to the nodes of the system, considered here as an undirected graph; second, we assign values to the edges of this graph. For the assignment, we propose modelling the decomposed applications with a matrix approach and using the Quadratic Assignment Problem (QAP). For assigning values to the edges, we propose a gradual-perturbation approach that searches for the best assignment cost while respecting parameters such as temperature, heat dissipation, energy consumption and chip area. The ultimate goal of this work is to offer architects of on-chip multiprocessor systems a non-traditional methodology and a systematic, efficient design-support tool usable from the functional-specification phase of the system onward.
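The QAP at the heart of the assignment step can be solved exactly for tiny instances by exhaustive search. This is a minimal brute-force sketch with made-up flow and distance matrices; realistic OCM mappings need heuristics:

```python
from itertools import permutations

def solve_qap(flow, dist):
    """Exhaustive Quadratic Assignment Problem: place unit i on node
    perm[i] so that the total of flow[i][j] * dist[perm[i]][perm[j]]
    is minimal. Brute force is only viable for very small n; larger
    instances call for heuristics such as simulated annealing."""
    n = len(flow)
    best_cost, best_perm = float('inf'), None
    for perm in permutations(range(n)):
        cost = sum(flow[i][j] * dist[perm[i]][perm[j]]
                   for i in range(n) for j in range(n))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_cost, best_perm

# Three units on a 3-node line (node 1 is central); units 0 and 1
# exchange the heaviest data flow, so unit 0 is best placed centrally.
flow = [[0, 5, 1], [5, 0, 0], [1, 0, 0]]
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
cost, placement = solve_qap(flow, dist)
```

The same objective extends naturally to the thesis's setting: flows come from the incidence matrix of the decomposed application, and distances from the 3D NaCl-style mesh.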
Abstract:
Air Cargo Economics - Outline:
- Air cargo participants
- Air cargo pricing, rates & yields
- Cargo-related costs
- Freighter aircraft operating costs
- Methods of cost allocation
- Pax/combi vs freighter services
- Lufthansa’s cargo strategy
- Quick change aircraft
- Aircraft wet leasing
- Conclusions
Abstract:
This exploratory, descriptive study investigated the relationship between the procedures, techniques and methods adopted for allocating indirect costs in five footwear manufacturers in the Vale dos Sinos region (RS, Brazil) and those described in the cost accounting literature. It also gathered statements from the companies' main users of cost information about the usefulness of indirect cost allocation in the management process. To that end, the theoretical foundations of Cost Accounting were reviewed from a Management Accounting perspective, from which a Frame of Reference was derived to guide the stages of the research. The methodology is then presented, justifying its use as well as the limitations inherent to this kind of exploratory study. Based on data obtained through interviews, a questionnaire and document analysis, the mechanisms used by the companies in allocating indirect costs are described, together with the interviewed managers' statements about the usefulness of these allocations for management information needs such as inventory valuation, determination of results, pricing, control and performance evaluation, planning and decision making. The results allowed a comparative analysis of the companies' practices against the theoretical framework previously surveyed, led to important conclusions, and provided input for recommendations and suggestions for future research.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
In this study, we aimed to estimate the effect that environmental, demographic, and socioeconomic factors have on dengue mortality in Latin America and the Caribbean. To that end, we conducted an observational ecological study, analyzing data collected between 1995 and 2009. Dengue mortality rates were highest in the Caribbean (Spanish-speaking and non-Spanish-speaking). Multivariate analysis through Poisson regression revealed that the following factors were independently associated with dengue mortality: time since identification of endemicity (adjusted rate ratio [aRR] = 3.2 [for each 10 years]); annual rainfall (aRR = 1.5 [for each 10³ L/m²]); population density (aRR = 2.1 and 3.2 for 20-120 inhabitants/km² and > 120 inhabitants/km², respectively); Human Development Index > 0.83 (aRR = 0.4); and circulation of the dengue 2 serotype (aRR = 1.7). These results highlight the important role that environmental, demographic, socioeconomic, and biological factors have played in increasing the severity of dengue in recent decades.
Abstract:
The EU began railway reform in earnest around the turn of the century. Two ‘railway packages’, comprising a series of directives, have meanwhile been adopted, and a third package has been proposed. A range of complementary initiatives has been undertaken or is underway. This BEEP Briefing inspects the main economic aspects of EU rail reform. After highlighting the dramatic loss of market share of rail since the 1960s, the case for reform is argued to rest on three points: the need for greater competitiveness of rail, promoting the (market-driven) diversion of road haulage to rail as a step towards sustainable mobility in Europe, and an end to the disproportionate claims on the public budgets of Member States. The core of the paper deals respectively with market failures in rail and in the internal market for rail services; the complex economic issues underlying vertical separation (unbundling) and pricing options; and the methods, potential and problems of introducing competition in rail freight and passenger services. Market failures in the rail sector are several (natural monopoly, economies of density, safety and asymmetries of information), exacerbated by no fewer than seven technical and legal barriers precluding the practical operation of an internal rail market. The EU's choice of vertical unbundling (with benefits similar in nature to those in other network industries, e.g. preventing opaque cross-subsidisation and greater cost revelation) risks the emergence of considerable coordination costs. The adoption of marginal cost pricing is problematic on economic grounds (drawbacks include arbitrary cost allocation rules in the presence of large economies of scope and relatively large common costs; a non-optimal incentive system, holding back the growth of freight services; and the possibly anti-competitive effects of two-part tariffs). Without further detailed harmonisation, it may also lead to many different systems across Member States, causing even greater distortions.
Insofar as freight could develop into a competitive market, a combination of Ramsey pricing (given the incentive for service providers to keep market share) and price ceilings based on stand-alone costs might be superior in terms of competition, market growth and regulatory oversight. The incipient cooperative approach to path coordination and allocation is welcome but likely to be seriously insufficient. The arguments for introducing competition, notably in freight, are many and valuable, e.g. optimal cross-border services, quality differentiation as well as general quality improvement, larger scale for cost recovery, and a decrease in rent seeking. Nevertheless, it is not correct to argue for the introduction of competition in rail tout court: it depends on the size of the market and on removing a host of barriers; it requires careful PSO definition and costing; and coordination failures ought to be pre-empted. On the other hand, reform and competition cannot and should not be assessed from a static perspective. Conduct and cost structures will change with reform. Infrastructure and investment in technology are known to generate enormous potential for cost savings, especially when coupled with the EU interoperability programme. All this dynamism may well help to induce entry and further enlarge the (net) welfare gains from EU railway reform. The paper ends with a few pointers for the way forward in EU rail reform.
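Ramsey pricing's inverse-elasticity rule can be sketched numerically. The demand function, elasticity and fixed cost below are hypothetical, and the simple step search assumes break-even is reachable on the rising side of the profit curve:

```python
def ramsey_prices(mc, elasticity, demand_at, fixed_cost, step=1e-4):
    """Ramsey pricing sketch: every service s gets the inverse-elasticity
    markup (p - mc)/p = k / elasticity[s], with the common factor k
    raised from zero until profits just cover the shared fixed cost.
    demand_at: function (service, price) -> quantity demanded."""
    def profit(k):
        total = 0.0
        for s in mc:
            p = mc[s] / (1 - k / elasticity[s])   # invert the markup rule
            total += (p - mc[s]) * demand_at(s, p)
        return total - fixed_cost
    k = 0.0
    while profit(k) < 0:
        k += step
    return {s: mc[s] / (1 - k / elasticity[s]) for s in mc}

# Single hypothetical freight service: marginal cost 10, isoelastic
# demand q = 1000 * p^-2, and a 16-unit shared fixed cost to recover.
prices = ramsey_prices({'freight': 10.0}, {'freight': 2.0},
                       lambda s, p: 1000.0 * p ** -2.0, 16.0)
```

With these numbers the common factor settles near k ≈ 0.4, a price of roughly 12.5; a less elastic service in the same system would bear a proportionally higher markup, which is exactly the welfare logic behind preferring Ramsey pricing to uniform marginal-cost pricing when large common costs must be recovered.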