938 results for Policy evaluation
Abstract:
A methodological framework for conducting a systematic, mostly qualitative, meta-synthesis of community-based rehabilitation (CBR) project evaluation reports is described. Developed in the course of an international pilot study, the framework proposes a systematic review process in phases which are strongly collaborative, methodologically rigorous and detailed. Through this suggested process, valuable descriptive data about CBR practice, strategies and outcomes may be synthesized. It is anticipated that future application of this methodology will contribute to an improved evidence base for CBR, which will facilitate the development of more appropriate policy and practice guidelines for disability service delivery in developing countries. The methodology will also have potential applications in areas beyond CBR which are similarly 'evidence poor' (lacking empirical research) but 'data rich' (with plentiful descriptive and evaluative reports).
Abstract:
The present PhD thesis develops and applies an evaluative methodology suited to the evaluation of policy and governance in complex policy areas. While extensive literatures exist on the topic of policy evaluation, governance evaluation has received less attention. At the level of governance, policymakers confront choices between different policy tools and governance arrangements in their attempts to solve policy problems, including variants of hierarchy, networks and markets. There is a need for theoretically-informed empirical research to inform decision-making at this level. To that end, the PhD develops an approach to evaluation by combining postpositivist policy analysis with heterodox political economy. Postpositivist policy analysis recognises that policy problems are often contested, that choices between policy options can involve significant trade-offs and that knowledge of policy options is itself dispersed and fragmented. Similarly, heterodox economics combines a concept of incommensurable values with an appreciation of the strengths and weaknesses of different institutional arrangements to realise them. A central concept of the field is coordination, which orientates policy analysis to the interactions of stakeholders in policy processes. The challenge of governance is to select the appropriate policy tools and arrangements which facilitate coordination. Via a postpositivist exploration of stakeholder ‘frames’, it is possible to ascertain whether coordination is occurring and to identify problems if it is not. Evaluative claims of governance can be made where arrangements can be shown to frustrate the realisation of shared values and objectives. 
The research contributes to knowledge in a number of ways: a) a distinctive evaluative approach that could be applied to other areas of health and public policy; b) a greater appreciation of the strengths and weaknesses of different forms of evidence in public policy, and in particular health policy; and c) concrete policy proposals for the governance and organisation of diabetes services, with implications for the NHS more broadly.
Abstract:
Background: There has been a proliferation of quality use of medicines activities in Australia since the 1990s. However, knowledge of the nature and extent of these activities was lacking, and a mechanism was required to map the activities and enable their coordination. Aims: To develop a geographical mapping facility as an evaluative tool to assist the planning and implementation of Australia's policy on the quality use of medicines. Methods: A web-based database incorporating geographical mapping software was developed. Quality use of medicines projects implemented across the country were identified from listings of projects funded by the Quality Use of Medicines Evaluation Program, the National Health and Medical Research Council, the Mental Health Strategy, the Rural Health Support, Education and Training Program, the Healthy Seniors Initiative, the General Practice Evaluation Program and the Drug Utilisation Evaluation Network. In addition, projects were identified through direct mail to persons working in the field. Results: The Quality Use of Medicines Mapping Project (QUMMP) was developed, providing a web-based database that can be continuously updated. The database showed the distribution of quality use of medicines activities by: (i) geographical region, (ii) project type, (iii) target group, (iv) stakeholder involvement, (v) funding body and (vi) evaluation method. As of September 2001, the database included 901 projects. Sixty-two per cent of projects had been conducted in Australian capital cities, where approximately 63% of the population reside. The distribution of projects varied between states. In Western Australia and Queensland, 36 and 73 projects had been conducted, respectively, representing approximately two projects per 100 000 people. By comparison, approximately seven projects per 100 000 people were recorded in South Australia and Tasmania, with six per 100 000 people in Victoria and three per 100 000 people in New South Wales.
Rural and remote areas of the country had more limited project activity. Conclusions: The mapping of projects by geographical location enabled easy identification of high and low activity areas. Analysis of the types of projects undertaken in each region enabled identification of target groups that had not been involved or services that had not yet been developed. This served as a powerful tool for policy planning and implementation and will be used to support the continued implementation of Australia's policy on the quality use of medicines.
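The per-capita comparison in the abstract above can be sketched in a few lines; the population figures below are rounded assumptions for illustration, not QUMMP data:

```python
# Illustrative sketch: projects per 100 000 people by state, the rate
# used in the QUMMP comparison. Project counts are from the abstract;
# populations are rounded assumptions for circa-2001 figures.
projects = {"WA": 36, "QLD": 73}
population = {"WA": 1_900_000, "QLD": 3_600_000}  # assumed approximate values

rate_per_100k = {
    state: projects[state] / population[state] * 100_000
    for state in projects
}
# Both states come out at roughly two projects per 100 000 people,
# matching the abstract's summary.
```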
Abstract:
Before implementing public policies, one must first compare the different possible actions and evaluate them to assess their social attractiveness. Recently the concept of well-being has been proposed as a multidimensional proxy for measuring societal prosperity and progress; a key research topic is then how we can measure and evaluate this plurality of dimensions for policy decisions. This paper defends the thesis articulated in the following points: 1. Different metrics are linked to different objectives and values. Using only one measurement unit (on the grounds of the so-called commensurability principle) to incorporate a plurality of dimensions, objectives and values necessarily implies reductionism. 2. Point 1) can be proven as a matter of formal logic by drawing on the work of Geach in moral philosophy. This theoretical demonstration is an original contribution of this article. Here the distinction between predicative and attributive adjectives is formalised and definitions are provided. Predicative adjectives are further distinguished into absolute and relative ones. The new concepts of set commensurability and rod commensurability are also introduced. 3. The existence of a plurality of social actors with an interest in the policy being assessed means that social decisions involve multiple types of values, of which economic efficiency is only one. It is therefore misleading to make social decisions based only on that one value. 4. Weak comparability of values, which is grounded on incommensurability, is shown to be the main methodological foundation of policy evaluation in the framework of well-being economics. Incommensurability does not imply incomparability; on the contrary, incommensurability is the only rational way to compare societal options under a plurality of policy objectives. 5.
Weak comparability can be implemented by using multi-criteria evaluation, which is a formal framework for applied consequentialism under incommensurability. Social Multi-Criteria Evaluation, in particular, allows considering both technical and social incommensurabilities simultaneously.
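As an illustration of weak comparability, the sketch below compares two hypothetical policy options criterion by criterion, without ever converting the criteria into a common unit; the options, criteria and scores are all invented for the example and do not come from the paper:

```python
# Minimal sketch of weak comparability via simple outranking: options
# are compared criterion by criterion and never collapsed into a single
# measurement unit. All scores here are invented assumptions.
options = {
    "A": {"efficiency": 7, "equity": 4, "environment": 6},
    "B": {"efficiency": 5, "equity": 6, "environment": 5},
}

def outranks(x, y, scores):
    """x outranks y if it is at least as good on a majority of criteria."""
    wins = sum(scores[x][c] >= scores[y][c] for c in scores[x])
    return wins > len(scores[x]) / 2

a_over_b = outranks("A", "B", options)  # A beats B on efficiency and environment
b_over_a = outranks("B", "A", options)  # B only beats A on equity
```

The options remain comparable (A outranks B here) even though efficiency, equity and environment are never expressed in one unit, which is the sense in which incommensurability does not imply incomparability.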
Abstract:
The purpose of this paper is to examine the relation between government measures, volunteer participation, climate variables and forest fires. A number of studies have related forest fires to causes of ignition, to fire history in one area, to the type of vegetation and weather characteristics, or to community institutions, but there is little research on the relation between fire production and government prevention and extinction measures from a policy evaluation perspective. An observational approach is first applied to select forest fires in the north-east of Spain. Taking a selection of fires above a certain size, a multiple regression analysis is conducted to find significant relations between policy instruments under the control of the government and the number of hectares burned in each case, while controlling for the effect of weather conditions and other context variables. The paper provides evidence on the effects of simultaneity and on the relevance of resorting to army soldiers on specific days with extraordinarily high simultaneity. The analysis also sheds light on the effectiveness of two preventive policies and of helicopters for extinction tasks.
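The regression design described above can be sketched roughly as follows; the data are synthetic and the variable names (soldier deployment, temperature) are assumptions standing in for the paper's actual policy instruments and weather controls:

```python
import numpy as np

# Hedged sketch of the paper's design: regress hectares burned on a
# policy variable while controlling for weather. The data below are
# synthetic, with a known deployment effect of -40 hectares.
rng = np.random.default_rng(0)
n = 200
soldiers = rng.integers(0, 2, n)        # 1 if army soldiers deployed (assumed instrument)
temperature = rng.normal(30, 5, n)      # weather control
hectares = 100 + 5 * temperature - 40 * soldiers + rng.normal(0, 10, n)

# OLS via least squares: columns are intercept, policy, control.
X = np.column_stack([np.ones(n), soldiers, temperature])
beta, *_ = np.linalg.lstsq(X, hectares, rcond=None)
# beta[1] estimates the deployment effect net of weather conditions.
```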
Abstract:
This report synthesizes the findings of 11 country reports on policy learning in labour market and social policies that were conducted as part of WP5 of the INSPIRES project, which is funded by the 7th Framework Programme of the EU Commission. Notably, this report puts forward objectives of policy learning, discusses tools, processes and institutions of policy learning, and presents the impacts of various tools and structures of the policy learning infrastructure on the actual policy learning process. The report defines three objectives of policy learning: evaluation and assessment of policy effectiveness, vision building and planning, and consensus building. In the 11 countries under consideration, the tools and processes of the policy learning infrastructure can be classified into three broad groups: public bodies, expert councils, and parties, interest groups and the private sector. Finally, we develop four recommendations for policy learning. Firstly, learning processes should keep the balance between centralisation and plurality. Secondly, learning processes should be kept stable beyond the usual political business cycles. Thirdly, policy learning tools and infrastructures should be sufficiently independent from political influence or bias. Fourthly, policy learning tools and infrastructures should balance mere effectiveness evaluation with vision building.
Abstract:
The aim of this thesis is to extend bootstrap theory to panel data models. Panel data are obtained by observing several statistical units over several time periods. Their dual individual and temporal dimension makes it possible to control for unobservable heterogeneity between individuals and between time periods, and thus to conduct richer studies than with time series or cross-sectional data. The advantage of the bootstrap is that it yields more precise inference than classical asymptotic theory, or makes inference possible where nuisance parameters would otherwise preclude it. The method consists of drawing random samples that resemble the analysis sample as closely as possible. The statistical object of interest is estimated on each of these random samples, and the set of estimated values is used for inference. The literature contains some applications of the bootstrap to panel data, but without rigorous theoretical justification or only under strong assumptions. This thesis proposes a bootstrap method better suited to panel data. Its three chapters analyse the method's validity and application. The first chapter posits a simple model with a single parameter and tackles the theoretical properties of the estimator of the mean. We show that the double resampling we propose, which accounts for both the individual and the temporal dimension, is valid in these models. Resampling only in the individual dimension is not valid in the presence of temporal heterogeneity; resampling only in the temporal dimension is not valid in the presence of individual heterogeneity. The second chapter extends the first to the linear panel regression model.
Three types of regressors are considered: individual characteristics, temporal characteristics, and regressors that vary over both time and individuals. Using a two-way error components model, the ordinary least squares estimator and the residual bootstrap, we show that resampling in the individual dimension alone is valid for inference on the coefficients associated with regressors that vary only across individuals. Resampling in the temporal dimension is valid only for the sub-vector of parameters associated with regressors that vary only over time. Double resampling, in contrast, is valid for inference on the entire parameter vector. The third chapter re-examines the difference-in-differences exercise of Bertrand, Duflo and Mullainathan (2004). This estimator is commonly used in the literature to evaluate the impact of public policies. The empirical exercise uses panel data from the Current Population Survey on women's wages in the 50 states of the United States from 1979 to 1999. Placebo state-level policy interventions are generated, and the tests are expected to conclude that these placebo policies have no effect on women's wages. Bertrand, Duflo and Mullainathan (2004) show that failing to account for heterogeneity and temporal dependence leads to serious size distortions of tests when evaluating the impact of public policies using panel data. One of the recommended solutions is the bootstrap. The double resampling method developed in this thesis corrects the test size problem and thus allows the impact of public policies to be evaluated correctly.
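A minimal sketch of the double resampling idea, assuming a toy panel with additive individual and time effects and the overall mean as the statistic of interest (the thesis's simple first-chapter setting, not its full regression framework):

```python
import numpy as np

# Sketch of double resampling: draw individuals and time periods
# independently, so the bootstrap respects both dimensions of the
# panel. The panel below is a toy assumption with both individual
# and time effects, the case where one-dimensional resampling fails.
rng = np.random.default_rng(1)
N, T = 30, 10
panel = (rng.normal(0, 1, (N, T))
         + rng.normal(0, 1, (N, 1))    # individual effects
         + rng.normal(0, 1, (1, T)))   # time effects

boot_means = []
for _ in range(500):
    ind = rng.integers(0, N, N)        # resample individuals with replacement
    per = rng.integers(0, T, T)        # resample time periods with replacement
    boot_means.append(panel[np.ix_(ind, per)].mean())

se = np.std(boot_means)  # bootstrap standard error of the panel mean
```

Resampling only `ind` or only `per` would ignore one source of heterogeneity and understate `se`, which is the failure mode the thesis documents.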
Abstract:
One objective of artificial intelligence is to model the behavior of an intelligent agent interacting with its environment. The environment's transformations can be modeled as a Markov chain, whose state is partially observable to the agent and affected by its actions; such processes are known as partially observable Markov decision processes (POMDPs). While the environment's dynamics are assumed to obey certain rules, the agent does not know them and must learn. In this dissertation we focus on the agent's adaptation as captured by the reinforcement learning framework. This means learning a policy---a mapping of observations into actions---based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. The set of policies is constrained by the architecture of the agent's controller. POMDPs require a controller to have a memory. We investigate controllers with memory, including controllers with external memory, finite state controllers and distributed controllers for multi-agent systems. For these various controllers we work out the details of the algorithms which learn by ascending the gradient of expected cumulative reinforcement. Building on statistical learning theory and experiment design theory, a policy evaluation algorithm is developed for the case of experience re-use. We address the question of sufficient experience for uniform convergence of policy evaluation and obtain sample complexity bounds for various estimators. Finally, we demonstrate the performance of the proposed algorithms on several domains, the most complex of which is simulated adaptive packet routing in a telecommunication network.
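The experience re-use idea can be illustrated with the standard importance-sampling estimator for off-policy evaluation; the two-action bandit below is an assumption chosen for brevity, not the dissertation's POMDP setting:

```python
import random

# Sketch of policy evaluation with experience re-use: actions were
# generated under a behaviour policy, and an importance-sampling
# estimator re-weights the observed rewards to evaluate a different
# target policy without collecting new experience.
random.seed(0)
behaviour = {"a": 0.5, "b": 0.5}   # policy that generated the data
target = {"a": 0.9, "b": 0.1}      # policy being evaluated
reward = {"a": 1.0, "b": 0.0}      # assumed deterministic rewards

samples = [random.choices(["a", "b"], weights=[0.5, 0.5])[0]
           for _ in range(5000)]
estimate = sum(target[s] / behaviour[s] * reward[s]
               for s in samples) / len(samples)
# True value under the target policy: 0.9 * 1.0 + 0.1 * 0.0 = 0.9
```

The sample-complexity question raised in the abstract is exactly the question of how many such re-used samples are needed before estimates like this one converge uniformly over a policy class.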
Abstract:
In order to evaluate the future potential benefits of emission regulation on regional air quality, while taking into account the effects of climate change, off-line air quality projection simulations are driven using weather forcing taken from regional climate models. These regional models are themselves driven by simulations carried out using global climate models (GCMs) and economic scenarios. Uncertainties and biases in climate models introduce an additional "climate modeling" source of uncertainty that adds to all other types of uncertainty in air quality modeling for policy evaluation. In this article we evaluate the changes in air-quality-related weather variables induced by replacing reanalysis-forced regional climate simulations with GCM-forced ones. As an example we use GCM simulations carried out in the framework of the ERA-Interim programme and of the CMIP5 project using the Institut Pierre-Simon Laplace climate model (IPSLcm), driving regional simulations performed in the framework of the EURO-CORDEX programme. In summer, we found compensating deficiencies acting on photochemistry: an overestimation by GCM-driven weather due to a positive bias in short-wave radiation, a negative bias in wind speed, too many stagnant episodes, and a negative temperature bias. In winter, air quality is mostly driven by dispersion, and we could not identify significant differences in either wind or planetary boundary layer height statistics between GCM-driven and reanalysis-driven regional simulations. However, precipitation appears largely overestimated in GCM-driven simulations, which could significantly affect the simulation of aerosol concentrations. The identification of these biases will help in interpreting the results of future air quality simulations using these data. Despite these biases, we conclude that the identified differences should not lead to major difficulties in using GCM-driven regional climate simulations for air quality projections.
Abstract:
We investigate whether there was a stable money demand function for Japan in the 1990s using both aggregate and disaggregate time series data. The aggregate data appear to support the contention that there was no stable money demand function, while the disaggregate data show that there was. Neither was there any indication of the presence of a liquidity trap. Possible sources of discrepancy are explored, and the diametrically opposite results between the aggregate and disaggregate analyses are attributed to the neglected heterogeneity among micro units. We also conduct a simulation analysis to show that when heterogeneity among micro units is present, the prediction of aggregate outcomes using aggregate data is less accurate than the prediction based on micro equations. Moreover, policy evaluation based on aggregate data can be grossly misleading.
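The simulation argument can be sketched as follows, assuming two micro units with heterogeneous slopes and no noise; predicting the aggregate by summing the micro equations is exact, while a single aggregate equation is not. All parameter values are illustrative assumptions:

```python
import numpy as np

# Toy version of the aggregation-bias point: when micro units have
# different slopes, one equation fitted to aggregated data predicts
# the aggregate worse than summing the correct micro equations.
rng = np.random.default_rng(2)
T = 200
x1 = rng.normal(1.0, 1.0, T)     # regressor for micro unit 1
x2 = rng.normal(3.0, 1.0, T)     # regressor for micro unit 2
y1 = 0.5 * x1                    # heterogeneous slopes, no noise
y2 = 2.0 * x2
y_agg, x_agg = y1 + y2, x1 + x2

# Aggregate equation: a single slope fitted to aggregated data.
b_agg = (x_agg @ y_agg) / (x_agg @ x_agg)
err_agg = np.mean((y_agg - b_agg * x_agg) ** 2)

# Micro equations: per-unit slopes, predictions summed afterwards.
err_micro = np.mean((y_agg - (0.5 * x1 + 2.0 * x2)) ** 2)
```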
Abstract:
Soon after the 2007-08 financial crisis, the Federal Reserve intervened to try to contain the recession. However, it did not merely lower interest rates; it also adopted unconventional policies, including direct lending to firms in high-grade credit markets. These new measures were controversial, and some opponents protested that they disproportionately helped people connected to the financial system who were already wealthy. We use a DSGE model for the analysis of unconventional monetary policy and introduce two distinct types of agents, capitalists and workers, to investigate its distributional impact. We find that the Fed's credit policy was successful in the labour market, which helps workers more, and that it introduced a new competitor, the government, into the banking market, which hurts capitalists more. We therefore find that the credit policy reduced inequality in the US.
Abstract:
A substantial need exists to reduce costs and develop more nutritionally adequate diets for established as well as emerging aquaculture species in the North Central Region (NCR). The study evaluated a diet for juvenile northern bluegill (Lepomis macrochirus) that is significantly less costly than currently available diets for sunfish, while yielding a growth rate that is at least equal to an industry standard sunfish diet. Such a diet formulation is now available to the NCR as the result of a recently funded North Central Regional Aquaculture Center (NCRAC) project.