817 results for Cost-Benefit
Abstract:
A considerable range of health-care procedures today, whether by established practice or by legal requirement, can be performed only with technological support. Reducing the downtime of a medical device is therefore directly tied to the viability of the care a patient needs. In other words, well-managed maintenance represents quality of service in a health-care facility and, in some cases, can even prevent a patient's death. To that end, it is essential that defects be recognized and handled appropriately, without losing sight of the optimization of financial resources. On this premise, this work proposes to relate the types of defects presented by medical equipment to their frequency of occurrence, using the Pareto method, thereby providing a highly useful analysis tool for structuring the corrective-maintenance teams of a clinical engineering service. Four equipment types were taken as the sample: electrocardiographs, cardiac monitors, electrosurgical units, and neonatal incubators. Their defects can be classified as of low, medium, or high complexity of resolution, which determines the need for general or specialized repair tools, the qualification level required of the technical team, and the universe of components and parts involved in each maintenance step. Defects were qualified and quantified for each equipment type above, and it was found that, for the medical-hospital equipment studied, most stoppages required simple, low-complexity interventions. In other words, a few defect types were responsible for most service orders, and the maintenance staff can anticipate and plan for each of them according to the Pareto chart. The clinical engineering service can thus be directed toward the priority needs of the health-care facility and is able to produce results with a better cost-benefit ratio.
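A minimal Python sketch of the Pareto tabulation this kind of analysis rests on, using invented defect categories and counts (none of the figures below come from the study): rank defects by frequency, then accumulate their share of all service orders.

```python
from collections import Counter

# Hypothetical corrective-maintenance log: one entry per service order,
# labeled with the defect found (illustrative values, not study data).
service_orders = (
    ["broken patient cable"] * 42 + ["dead battery"] * 23 +
    ["blown fuse"] * 14 + ["faulty display"] * 6 +
    ["damaged main board"] * 3 + ["calibration drift"] * 2
)

counts = Counter(service_orders)
total = sum(counts.values())

# Rank defects by frequency and accumulate their share of all orders;
# this ranked cumulative share is exactly what a Pareto chart plots.
cumulative = 0.0
print(f"{'defect':<24}{'orders':>8}{'cum. share':>12}")
for defect, n in counts.most_common():
    cumulative += 100.0 * n / total
    print(f"{defect:<24}{n:>8}{cumulative:>11.1f}%")
```

With data shaped like the study's findings, the first two or three rows already cover most orders: the "few defects, many service orders" pattern the abstract reports.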
Abstract:
To assess the benefit of rapidly communicating a respiratory-virus diagnosis to the clinician, the economic viability of two tests was analyzed, with turnaround times of 2 hours for the rapid test and 48 hours for the molecular-biology test. The collected samples were processed using conventional techniques and the tests available on the local market. Two rapid immunochromatographic tests were chosen, covering four analytes: Influenza A, Influenza H1N1, Influenza B, and Respiratory Syncytial Virus (RSV); for molecular biology, a multiplex RT-PCR test covering 25 viral and bacterial pathogens was used. The sample types were nasopharyngeal swab and nasopharyngeal lavage. The study population comprised adult cancer patients, who need a rapid answer since most have immune systems compromised by disease or treatment. This cross-sectional study, conducted between 2012 and 2013, assessed the economic viability of introducing diagnostic tests for acute respiratory infection of viral etiology from nasopharyngeal samples in cancer patients treated at the Centro de Atendimento de Oncologia Intercorrência (CAIO) of the Instituto do Câncer do Estado de São Paulo (ICESP), a public hospital serving the Sistema Único de Saúde (SUS) exclusively, and at Hospital A.C. Camargo, which serves both SUS and private patients. The study included 152 patients under treatment for any type of cancer, predominantly female (81 women and 70 men), aged 18-86 years. Patients were consulted before participating; the inclusion criteria were being a cancer patient with a history of fever (even if only reported) accompanied by cough or sore throat, cough, and acute respiratory symptoms, managed under a standardized protocol including assessment at admission, follow-up, and antimicrobial management. For the economic evaluation, patients were classified by general health status: those in good condition could be discharged and take their medication at home, avoiding 5 days of hospitalization if they received a positive result for Influenza or RSV, whereas patients with another virus, a negative result, or poor general condition remained hospitalized for 7 days under observation and appropriate medication. Economic analyses were carried out in two settings, the public and the private health systems, considering the reduction in hospitalization days. The cost-benefit analysis was favorable in the private system but inadequate for SUS, as is any other monetary measure, since SUS reimbursement values lag behind the cost of any hospitalization. The cost-effectiveness analysis, which looks at factors beyond the monetary, was favorable in both systems, which face shortages of beds, and captures patient-level gains: avoiding unnecessary antibiotic intake, companions' expenses, and lost days of work and study. The results of the rapid tests did not agree with those of the molecular-biology multiplex test.
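A hypothetical worked example of the bed-day arithmetic described above; every monetary value is an assumption invented for illustration, not a figure from the study.

```python
# Stylized per-patient saving from the rapid test: a positive
# Influenza/RSV result in a patient fit for discharge avoids 5 of the
# 7 observation days. All costs below are illustrative assumptions.
COST_PER_BED_DAY = 800.0   # assumed daily hospitalization cost (BRL)
RAPID_TEST_COST = 120.0    # assumed price of one rapid test (BRL)
AVOIDED_BED_DAYS = 5       # early discharge saves 5 bed-days

def net_benefit_per_patient(p_positive_and_dischargeable: float) -> float:
    """Expected saving per tested patient, net of the test's cost."""
    expected_saving = (p_positive_and_dischargeable
                       * AVOIDED_BED_DAYS * COST_PER_BED_DAY)
    return expected_saving - RAPID_TEST_COST

# If, say, 20% of tested patients are positive and fit for discharge:
print(f"net benefit per patient: {net_benefit_per_patient(0.20):.2f} BRL")
```

Under these assumptions the test pays for itself whenever the avoided bed-day saving exceeds its price, which is why the same arithmetic can succeed in the private system yet fail under SUS reimbursement values that sit below the real cost of a hospitalization.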
Abstract:
Why does the European Union (EU) join international human rights treaties? This paper develops motivational profiles pertaining either to a ‘logic of appropriateness’ or a ‘logic of consequentialism’ in order to answer this question. It compares the EU’s motivations for its recent accession to the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD) with those dominating the EU’s non-accession to the Council of Europe Convention on preventing and combating violence against women and domestic violence (Istanbul Convention). Based on this cross-case analysis, I argue that the EU’s accession decisions are best viewed as cost-benefit calculations and explained by the strength of opposition and the desire to spread its norms. The EU is only marginally concerned with efforts to construct an ‘appropriate role’, although its accession considerations are positively influenced by (varying degrees of) an internalized commitment to human rights. The paper aims at deepening the understanding of the EU’s motivations in the paradigmatic hard case of accession to international human rights treaties, not least to evaluate the EU’s ‘exceptional nature’, facilitate its predictability for stakeholders, and contribute to political and ethical debates surrounding future rites of passage as a global actor.
Abstract:
Final dissertation of the Integrated Master's Degree in Medicine (Mestrado Integrado em Medicina), Faculdade de Medicina, Universidade de Lisboa, 2014
Abstract:
Food policy is one of the most regulated policy fields at the EU level. ‘Unholy alliances’ are collaborative patterns that temporarily bring together antagonistic stakeholders behind a common cause. This paper deals with such ‘transversal’ co-operations between citizens’ groups (NGOs, consumer associations…) and economic stakeholders (food industries, retailers…), focusing on their ambitions and consequences. It builds on two case studies that enable a more nuanced view of the prospects for the development of transversal networks at the EU level. The main findings are that (i) the rationale behind the adoption of collaborative partnerships comes from a case-by-case cost/benefit analysis leading to hopes of improved access to institutions; (ii) membership of a collaborative network leads to a learning process closely linked to the network’s performance; and (iii) coalitions can enjoy a better reception, rather than automatically better access, depending on several factors independent of the stakeholders themselves.
Abstract:
From the Introduction. The present contribution is an attempt to raise awareness between the 'trenches' by juxtaposing the two approaches to subsidiarity. Subsequently, I shall set out why, in economics, subsidiarity is embraced as a key principle in the design and working of the Union and how a functional subsidiarity test can be derived from this thinking. Throughout the paper, a range of illustrations and examples is provided in an attempt to show the practical applicability of a subsidiarity test. This does not mean, of course, that the application of the test can automatically "solve" all debates on whether subsidiarity is (not) violated. What it does mean, however, is that a careful methodology can be a significant help to, for example, national parliaments and the Brussels circuit in particular, to discourage careless politicisation as much as possible and to render assessments of subsidiarity comparable throughout the Union. The latter virtue should be of interest to national parliaments in cooperating, within just six weeks, on a common stance in the case of a suspected violation of the principle. The structure of the paper is as follows. Section 2 gives a flavour of the very different approaches to, and appreciation of, the subsidiarity principle in European law and in the economics of multi-tier government. Section 3 elaborates on the economics of multi-tier government as a special instance of cost/benefit analysis of (de)centralisation in the three public economic functions of any government system. This culminates in a five-step subsidiarity test and a brief discussion of its proper and improper application. Section 4 applies the test in a non-technical fashion to a range of issues of the "efficiency function" (i.e. allocation and markets) of the EU. After showing that the functional logic of subsidiarity may require liberalisation to be accompanied by various degrees of centralisation, a number of fairly detailed illustrations of how to deal with subsidiarity in the EU are provided. One illustration concerns how the subsidiarity logic is misused by protagonists (labour in the internal market). A slightly different but frequently encountered aspect consists of the refusal to recognise that the EU (that is, some form of centralisation) offers a better solution than 25 national ones. A third range of issues, where the functional logic of subsidiarity could be useful, emerges when the boundaries of national competences shift owing to more intense cross-border flows and developments. Other subsections are devoted to Union public goods and to the question of whether the subsidiarity test might trace instances of EU decentralisation: a partial or complete shift of a policy or regulation to Member States. The paper refrains from an analysis of the application of the subsidiarity test to the other two public functions, namely equity and macro-economic stabilisation. Section 5 argues that the use of a well-developed methodology of a functional subsidiarity test would be most useful for the national parliaments, and even more so for their cooperation in case of a suspected violation of subsidiarity. Section 6 concludes.
Abstract:
In the EU circuit (especially the European Parliament, the Council and Coreper), as well as in national parliaments of the EU Member States, one observes a powerful tendency to regard 'subsidiarity' as a 'political' issue. Moreover, subsidiarity is frequently seen as a one-way street: powers going 'back to' Member States. Both interpretations are at least partly flawed and less than helpful when looking for practical ways to deal with subsidiarity at both the EU and Member State levels. The present paper shows that subsidiarity as a principle is profoundly 'functional' in nature and, hence, is and must be a two-way principle. A functional subsidiarity test is developed and its application is illustrated for a range of policy issues in the internal market in its widest sense, for equity, and for macro-economic stabilisation questions in European integration. Misapplications of 'subsidiarity' are also demonstrated. For a good understanding, subsidiarity being a functional, two-way principle means neither that elected politicians should not have the final (political!) say (for which they are accountable), nor that subsidiarity tests, even if properly conducted, cannot and will not be politicised once the results enter the policy debate. Such politicisation forms a natural run-up to decision-making by those elected for it. But the quality and reasoning of the test, as well as the structuring of the information in a logical sequence (in accordance with the current protocol and with the one in the constitutional treaty), are likely to be directly helpful for decision-makers confronted with complicated and often specialised proposals. EU debates and decision-making are therefore best served by separating the functional subsidiarity test (prepared by independent professionals) from the final political decision itself. If the test were accepted Union-wide, it would also assist national parliaments in conducting comparable tests in a relatively short period, as the basis for possible joint action (as suggested by the constitutional treaty). The core of the paper explains how the test is formulated and applied. A functional approach to subsidiarity in the framework of European representative democracy seeks to find the optimal assignment of regulatory or policy competences to the various tiers of government. In the final analysis, this is about structures facilitating the highest possible welfare in the Union, in the fundamental sense that preferences and needs are best satisfied. What is required for such an analysis is no less than a systematic cost/benefit framework to assess the (de)merits of (de)centralisation in the EU.
Abstract:
Mutual recognition is one of the most appreciated innovations of the EU. The idea is that one can pursue market integration, indeed 'deep' market integration, while respecting 'diversity' amongst the participating countries. Put differently, in pursuing 'free movement' of goods, mutual recognition facilitates free movement by disciplining the nature and scope of 'regulatory barriers', whilst allowing some degree of regulatory discretion for EU Member States. This BEER paper attempts to explain the rationale and logic of mutual recognition in the EU internal goods market and its working in actual practice over roughly three decades, culminating in a qualitative cost/benefit analysis and in its recent improvement in terms of 'governance' under the so-called New Legislative Framework (first denoted as the 2008 Goods Package), which improves the benefit/cost ratio. For new (in contrast to existing) national regulation, the intrusive EU procedure to impose mutual recognition is presented as well, with basic data showing its critical importance in keeping the internal goods market free. All this is complemented by a short summary of the scant economic literature on mutual recognition. Subsequently, the analysis is extended to the internal market for services. This is done in two steps: first by recalling the debate on the origin principle (which goes further than mutual recognition EU-style) and how mutual recognition works under the horizontal services directive; this is followed by a short section on how mutual recognition works in vertical (i.e. sectoral) services markets.
Abstract:
The topic of this study is Washington State vehicle surplus and whether a higher yield on taxpayer investment in the purchase, sale and use of state motor vehicles can be achieved. The hypothesis addressed is: can Washington State create a higher return on investment from taxpayer dollars by donating vehicles to automotive training programs? I explore this topic by asking: a) What is the current policy? b) What is the perception of automotive educators of that policy? c) What are the current returns, and could they be improved through policy change? The methodology involves obtaining quantitative data from the state and other entities on vehicle purchases and costs, together with survey responses from automotive educators in the state. Qualitative data are examined in the context of the policies and of education, and then incorporated into a cost/benefit analysis of the return on investment to taxpayers.
Abstract:
Remotely sensed data have been used extensively for environmental monitoring and modeling at a number of spatial scales; however, the limited range of satellite imaging systems has often constrained the scales of these analyses. A wider variety of data sets is now available, allowing image data to be selected to match the scale of the environmental structure(s) or process(es) being examined. A framework is presented for use by environmental scientists and managers, enabling their spatial data collection needs to be linked to a suitable form of remotely sensed data. A six-step approach is used, combining image spatial analysis and scaling tools within the context of hierarchy theory. The main steps are: (1) identification of information requirements for the monitoring or management problem; (2) development of ideal image dimensions (the scene model); (3) exploratory analysis of existing remotely sensed data using scaling techniques; (4) selection and evaluation of suitable remotely sensed data based on the scene model; (5) selection of suitable spatial analytic techniques to meet the information requirements; and (6) cost-benefit analysis. Results from a case study show that the framework provided an objective mechanism for identifying the relevant aspects of the monitoring problem and the environmental characteristics that matter when selecting remotely sensed data and analysis techniques.
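As an illustration of steps (2) and (4), the sketch below screens candidate sensors against a hypothetical scene model; the sensor names, resolutions, swaths and costs are invented placeholders, not data from the paper's case study.

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    pixel_size_m: float    # ground sample distance
    scene_width_km: float  # swath of a single scene
    cost_per_scene: float  # acquisition cost (arbitrary units)

# Invented candidates standing in for whatever imagery is on offer.
CANDIDATES = [
    Sensor("sensor_A", 0.5, 10.0, 3000.0),
    Sensor("sensor_B", 10.0, 180.0, 600.0),
    Sensor("sensor_C", 30.0, 185.0, 0.0),
]

def suitable(s: Sensor, min_object_m: float, extent_km: float) -> bool:
    # Rule of thumb: pixels no larger than half the smallest structure
    # of interest, and a swath wide enough to cover the study area
    # without excessive mosaicking.
    return (s.pixel_size_m <= min_object_m / 2
            and s.scene_width_km >= extent_km)

# Example scene model: 20 m vegetation patches over a 50 km study area.
for s in CANDIDATES:
    if suitable(s, min_object_m=20.0, extent_km=50.0):
        print(f"{s.name}: pixel={s.pixel_size_m} m, "
              f"cost={s.cost_per_scene}")
```

The surviving candidates would then be compared on cost, in the spirit of step (6) of the framework.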
Abstract:
This study examines the effect of increasing water depth and water velocity upon the surfacing behaviour of the bimodally respiring turtle, Rheodytes leukops. Surfacing frequency was recorded for R. leukops at varying water depths (50, 100, 150 cm) and water velocities (5, 15, 30 cm s⁻¹) during independent trials to provide an indirect cost-benefit analysis of aquatic versus pulmonary respiration. With increasing water velocity, R. leukops decreased its surfacing frequency twentyfold, suggesting a heightened reliance upon aquatic gas exchange. An elevated reliance upon aquatic respiration, which presumably translates into a decreased air-breathing frequency, may be metabolically more efficient for R. leukops compared to the expenditure (i.e. time and energy) associated with air-breathing within fast-flowing riffle zones. Additionally, R. leukops at higher water velocities preferentially selected low-velocity microhabitats, presumably to avoid the metabolic expenditure associated with high water flow. In contrast, increasing water depth had no effect upon the surfacing frequency of R. leukops, suggesting little to no change in the respiratory partitioning of the species across treatment settings. Routinely long dives (>90 min) recorded for R. leukops indicate a high reliance upon aquatic O₂ uptake regardless of water depth. Moreover, the metabolic and temporal costs attributed to pulmonary gas exchange within a pool-like environment were likely minimal for R. leukops, irrespective of water depth.
Abstract:
This paper considers the economics of conserving a species with mainly non-use value, the endangered mahogany glider. Three serial surveys of Brisbane residents provide data on respondents' knowledge of the mahogany glider. The results supply information about respondents' attitudes to the mahogany glider, to its conservation and to relevant public policies, and about how these factors vary as participants' knowledge of the mahogany glider alters. Similarly, data are provided and analysed on respondents' willingness to pay to conserve the mahogany glider and how it changes. Population viability analysis is applied to estimate the habitat area required for a minimum viable population of the mahogany glider, so as to ensure at least a 95% probability of its survival for 100 years. Places are identified in Queensland where the requisite minimum area of critical habitat can be conserved. Using the survey results as a basis, the likely willingness of groups of Australians to pay for the conservation of the mahogany glider is estimated, and consequently their willingness to pay for the minimum required area of its habitat. Methods for estimating the cost of protecting this habitat are outlined. Australia-wide benefits are estimated to exceed the costs. Establishing a national park containing the minimum viable population of the mahogany glider is an appealing management option. This would also be beneficial in conserving other endangered wildlife species and ecosystems, so economic benefits additional to those estimated on account of the mahogany glider itself can be obtained.
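A back-of-envelope sketch of the benefit-cost comparison the abstract describes, aggregating willingness to pay (WTP) over households and discounting it; every figure is an illustrative assumption, not an estimate from the paper.

```python
# All values below are invented for illustration only.
households = 7_000_000       # assumed number of contributing households
mean_wtp_per_year = 5.00     # assumed mean WTP (AUD/household/year)
years = 10                   # assumed payment horizon
discount_rate = 0.05

# Present value of the stream of aggregate annual WTP payments.
pv_benefits = sum(
    households * mean_wtp_per_year / (1 + discount_rate) ** t
    for t in range(1, years + 1)
)

habitat_cost = 40_000_000.0  # assumed cost of securing the habitat (AUD)

print(f"PV of benefits: {pv_benefits:,.0f} AUD")
print(f"Habitat cost:   {habitat_cost:,.0f} AUD")
print("Benefits exceed costs:", pv_benefits > habitat_cost)
```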
Abstract:
Fundamental principles of precaution are legal maxims that call for preventive actions, perhaps as contingent interim measures while relevant information about causality and harm remains unavailable, to minimize the societal impact of potentially severe or irreversible outcomes. Such principles do not explain how to make choices or how to identify what is protective when incomplete and inconsistent scientific evidence of causation characterizes the potential hazards. Rather, they entrust lower jurisdictions, such as agencies or authorities, to make current decisions while recognizing that future information can contradict the scientific basis that supported the initial decision. After reviewing and synthesizing national and international legal aspects of precautionary principles, this paper addresses the key question: how can society manage potentially severe, irreversible or serious environmental outcomes when variability, uncertainty, and limited causal knowledge characterize the decision-making? A decision-analytic solution is outlined that focuses on risky decisions and accounts for prior states of information and scientific beliefs that can be updated as subsequent information becomes available. As a practical and established approach to the causal reasoning and decision-making under risk inherent in precautionary decision-making, these (Bayesian) methods help decision-makers and stakeholders because they formally account for probabilistic outcomes and new information, and are consistent and replicable. Rational choice of an action from among various alternatives, defined as a choice that makes preferred consequences more likely, requires accounting for costs, benefits and the change in risks associated with each candidate action. Decisions under any form of the precautionary principle reviewed must account for the contingent nature of scientific information, creating a link to the decision-analytic principle of expected value of information (VOI), which shows the relevance of new information relative to the initial (and smaller) set of data on which the decision was based. We exemplify this seemingly simple situation using risk management of BSE. As an integral aspect of causal analysis under risk, the methods developed in this paper permit the addition of non-linear, hormetic dose-response models to the current set of regulatory defaults, such as the linear, non-threshold models. This increase in the number of defaults is an important improvement because most variants of the precautionary principle require cost-benefit balancing. Specifically, increasing the set of causal defaults accounts for beneficial effects at very low doses. We also show, and conclude, that quantitative risk assessment dominates qualitative risk assessment, supporting the extension of the set of default causal models.
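A minimal sketch of the expected-value-of-information idea invoked above, for a stylized precautionary choice between banning and allowing an activity; the payoffs and prior probabilities are invented for illustration.

```python
# Prior beliefs over the unknown state of the world (illustrative).
states = {"hazard_real": 0.3, "hazard_absent": 0.7}

# Net social payoff of each (action, state) pair, also illustrative:
# a ban always costs 10; allowing a real hazard costs 100.
payoff = {
    ("ban",   "hazard_real"):   -10.0,
    ("ban",   "hazard_absent"): -10.0,
    ("allow", "hazard_real"):  -100.0,
    ("allow", "hazard_absent"):   0.0,
}
actions = {a for a, _ in payoff}

def expected(action: str) -> float:
    return sum(p * payoff[(action, s)] for s, p in states.items())

# Best act on current information (the precautionary interim measure).
best_now = max(actions, key=expected)

# With perfect information we could pick the best action per state.
ev_perfect = sum(p * max(payoff[(a, s)] for a in actions)
                 for s, p in states.items())

evpi = ev_perfect - expected(best_now)
print(f"act now: {best_now}, expected payoff = {expected(best_now):.1f}")
print(f"expected value of perfect information = {evpi:.1f}")
```

Here the ban is optimal today, but further research is worth up to 7.0 payoff units: exactly the link between precautionary interim measures and VOI that the abstract draws.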
Abstract:
Water quality is a key concern in the current global environment, and the need to promote practices that help protect water quality, such as riparian zone management, is paramount. The present study used the theory of planned behaviour as a framework for understanding how beliefs influence decisions about riparian zone management. Respondents completed a survey that assessed their behavioural, normative, and control beliefs in relation to intentions to manage riparian zones on their property. The results showed that, overall, landholders with strong intentions to manage their riparian zones differed significantly in their beliefs from landholders with weak intentions. Strong intentions to manage riparian zones were associated with a favourable cost-benefit analysis, greater perceptions of normative support for the practice, and lower perceptions of the extent to which barriers would impede management of riparian zones. It was also evident that willingness to comply with the recommendations of salient referents, beliefs about the benefits of riparian zone management, and perceptions of the extent to which barriers would impede riparian zone management were most important for determining intentions to manage riparian zones. Implications for policy and extension practice are discussed.
Abstract:
Knowledge maintenance is a major challenge for both knowledge management and the Semantic Web. Over the Semantic Web, a network of collaborating agents will operate, each with its own ontologies or knowledge bases. A change in the knowledge state of one agent may need to be propagated across a number of agents and their associated ontologies. The challenge is to decide how to propagate a change of knowledge state. The effects of a change in knowledge state cannot be known in advance, and so an agent cannot know who should be informed unless it adopts a simple ‘tell everyone, everything’ strategy. This situation is highly reminiscent of the classic Frame Problem in AI. We argue that for agent-based technologies to succeed, far greater attention must be given to creating an appropriate model for knowledge update. In a closed system, simple strategies are possible (e.g. ‘sleeping dog’, ‘cheap test’, or even complete checking). However, in an open system where cause and effect are unpredictable, a coherent cost-benefit-based model of agent interaction is essential; otherwise, the effectiveness of every act of knowledge update/maintenance is brought into question.
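One way to read this argument is as a per-peer expected-utility test; the sketch below is a speculative illustration of such a cost-benefit update policy, with invented agent names and numbers, in contrast to the ‘tell everyone, everything’ strategy.

```python
from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    p_affected: float           # chance the change matters to this peer
    benefit_if_affected: float  # value of keeping the peer consistent
    update_cost: float          # messaging + ontology-revision cost

def should_notify(peer: Peer) -> bool:
    # Propagate only when the expected benefit outweighs the cost.
    return peer.p_affected * peer.benefit_if_affected > peer.update_cost

peers = [
    Peer("scheduler_agent", 0.80, 10.0, 2.0),  # 8.0 > 2.0 -> notify
    Peer("archive_agent",   0.05, 10.0, 2.0),  # 0.5 < 2.0 -> skip
]

for p in peers:
    print(f"{p.name}: {'notify' if should_notify(p) else 'skip'}")
```

In a closed system, estimates like `p_affected` could come from complete checking; in an open system they can only be beliefs, which is where the abstract's caution about the effectiveness of each update bites.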