114 results for policy outcomes
at Université de Lausanne, Switzerland
Abstract:
Using meta-analytic methods on a sample of 74 studies, we explore the links between CPA and public policy outcomes, and between CPA and firm outcomes. We find that CPA has at best a weak effect and that it appears to be better at maintaining existing public policies than at changing them.
Abstract:
This dissertation focuses on the practice of regulatory governance through the study of the functioning of formally independent regulatory agencies (IRAs), with special attention to their de facto independence. The research goals are grounded in a "neo-positivist" (or "reconstructed positivist") position (Hawkesworth 1992; Radaelli 2000b; Sabatier 2000). This perspective starts from the ontological assumption that even if subjective perceptions are constitutive elements of political phenomena, a real world exists beyond any social construction and can, however imperfectly, become the object of scientific inquiry. Epistemologically, it follows that hypothetical-deductive theories with explanatory aims can be tested by employing a proper methodology and set of analytical techniques. It is thus possible, to a certain extent, to make scientific inferences and general conclusions, according to a Bayesian conception of knowledge, in order to update prior scientific beliefs in the truth of the related hypotheses (Howson 1998), while acknowledging that the conditions of truth are at least partially subjective and historically determined (Foucault 1988; Kuhn 1970). At the same time, a sceptical position is adopted towards the supposed disjunction between facts and values and the possibility of discovering abstract universal laws in social science. It has been observed that the current version of capitalism corresponds to the golden age of regulation, and that since the 1980s no government activity in OECD countries has grown faster than regulatory functions (Jacobs 1999). In an apparent paradox, the ongoing dynamics of liberalisation, privatisation, decartelisation, internationalisation, and regional integration have hardly led to the crumbling of the state; instead, they have promoted a wave of regulatory growth in the face of new risks and new opportunities (Vogel 1996).
Accordingly, a new order of regulatory capitalism is rising, implying a new division of labour between state and society and entailing the expansion and intensification of regulation (Levi-Faur 2005). The previous order, relying on public ownership and public intervention and/or on sectoral self-regulation by private actors, is being replaced by a more formalised, expert-based, open, and independently regulated model of governance. Independent regulatory agencies (IRAs), that is, formally independent administrative agencies with regulatory powers that benefit from public authority delegated by political decision makers, represent the main institutional feature of regulatory governance (Gilardi 2008). IRAs constitute a relatively new technology of regulation in western Europe, at least for certain domains, but they are increasingly widespread across countries and sectors. For instance, independent regulators have been set up to regulate very diverse issues, such as general competition, banking and finance, telecommunications, civil aviation, railway services, food safety, the pharmaceutical industry, electricity, environmental protection, and personal data privacy. Two attributes of IRAs deserve special mention. On the one hand, they are formally separated from democratic institutions and elected politicians, thus raising normative and empirical concerns about their accountability and legitimacy. On the other hand, hard questions remain unaddressed about their role as political actors, since, together with regulatory competencies, IRAs often accumulate executive, (quasi-)legislative, and adjudicatory functions, and about their performance.
Abstract:
Methods like event history analysis can show the existence of diffusion and part of its nature, but do not study the process itself. Nowadays, thanks to the increasing performance of computers, processes can be studied using computational modeling. This thesis presents an agent-based model of policy diffusion inspired mainly by the model developed by Braun and Gilardi (2006). I first develop a theoretical framework of policy diffusion that presents the main internal drivers of policy diffusion, such as the preference for the policy, the effectiveness of the policy, the institutional constraints, and the ideology, and its main mechanisms, namely learning, competition, emulation, and coercion. Diffusion, expressed by these interdependencies, is therefore a complex process that needs to be studied with computational agent-based modeling. In a second step, computational agent-based modeling is defined along with its most significant concepts: complexity and emergence. Using computational agent-based modeling implies the development of an algorithm and its programming. Once the algorithm has been programmed, we let the different agents interact. Consequently, a phenomenon of diffusion, derived from learning, emerges, meaning that the choice made by an agent is conditional on that made by its neighbors. As a result, learning follows an inverted S-curve, which leads to partial convergence (global divergence and local convergence) that triggers the emergence of political clusters, i.e. regions with the same policy. Furthermore, the average effectiveness in this computational world tends to follow a J-shaped curve, meaning not only that time is needed for a policy to deploy its effects, but also that it takes time for a country to find the best-suited policy.
To conclude, diffusion is an emergent phenomenon arising from complex interactions, and the outcomes produced by my model are in line with both the theoretical expectations and the empirical evidence.
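The learning mechanism described in the abstract above can be sketched in a few lines. This is an illustrative toy, not the thesis's actual model: the ring topology, the two-policy setup, the effectiveness values, and the adoption probability are all assumptions made for the sketch.

```python
import random

def simulate_diffusion(n=20, steps=200, seed=42):
    """Toy agent-based policy diffusion on a ring of countries.

    Each agent holds policy 0 or 1. At each step a random agent
    inspects its two neighbours and, with some probability, adopts
    a neighbour's policy if that policy looks more effective
    (a crude 'learning' mechanism). Returns the final policy
    vector and the share of agents using policy 1 over time.
    """
    rng = random.Random(seed)
    effectiveness = {0: 0.4, 1: 0.7}  # assumed payoffs of each policy
    state = [rng.choice((0, 1)) for _ in range(n)]
    adoption_curve = []
    for _ in range(steps):
        i = rng.randrange(n)
        neighbours = [state[(i - 1) % n], state[(i + 1) % n]]
        best = max(neighbours, key=lambda p: effectiveness[p])
        # learning: switch only if the neighbour's policy is better
        if effectiveness[best] > effectiveness[state[i]] and rng.random() < 0.8:
            state[i] = best
        adoption_curve.append(sum(state) / n)
    return state, adoption_curve
```

Because agents only ever switch to a more effective policy, the adoption share of policy 1 is non-decreasing, giving the saturating shape of the S-curve's upper half; richer dynamics (clusters, the J-shaped effectiveness curve) need the full model.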
Abstract:
The present study investigates the short- and long-term outcomes of a computer-assisted cognitive remediation (CACR) program in adolescents with psychosis or at high risk. 32 adolescents participated in a blinded 8-week randomized controlled trial of CACR treatment compared to computer games (CG). Clinical and neuropsychological evaluations were undertaken at baseline, at the end of the program, and at 6-month follow-up. At the end of the program (n = 28), results indicated that visuospatial abilities (Repeatable Battery for the Assessment of Neuropsychological Status, RBANS; P = .005) improved significantly more in the CACR group compared to the CG group. Furthermore, other cognitive functions (RBANS), psychotic symptoms (Positive and Negative Symptom Scale) and psychosocial functioning (Social and Occupational Functioning Assessment Scale) improved significantly, but at similar rates, in the two groups. In the long term (n = 22), cognitive abilities did not demonstrate any amelioration in the control group while, in the CACR group, significant long-term improvements in inhibition (Stroop; P = .040) and reasoning (Block Design Test; P = .005) were observed. In addition, symptom severity (Clinical Global Improvement) decreased significantly in the control group (P = .046) and marginally in the CACR group (P = .088). To sum up, CACR can be successfully administered in this population. CACR proved to be effective over and above CG for the most intensively trained cognitive ability. Finally, in the long term, enhanced reasoning and inhibition abilities, which are necessary to execute higher-order goals or to adapt behavior to an ever-changing environment, were observed in adolescents benefiting from CACR.
Abstract:
Over thirty years ago, Leamer (1983) - among many others - expressed doubts about the quality and usefulness of empirical analyses for the economic profession by stating that "hardly anyone takes data analyses seriously. Or perhaps more accurately, hardly anyone takes anyone else's data analyses seriously" (p.37). Improvements in data quality, more robust estimation methods and the evolution of better research designs seem to make that assertion no longer justifiable (see Angrist and Pischke (2010) for a recent response to Leamer's essay). The economic profession and policy makers alike often rely on empirical evidence as a means to investigate policy relevant questions. The approach of using scientifically rigorous and systematic evidence to identify policies and programs that are capable of improving policy-relevant outcomes is known under the increasingly popular notion of evidence-based policy. Evidence-based economic policy often relies on randomized or quasi-natural experiments in order to identify causal effects of policies. These can require relatively strong assumptions or raise concerns of external validity. In the context of this thesis, potential concerns are for example endogeneity of policy reforms with respect to the business cycle in the first chapter, the trade-off between precision and bias in the regression-discontinuity setting in chapter 2 or non-representativeness of the sample due to self-selection in chapter 3. While the identification strategies are very useful to gain insights into the causal effects of specific policy questions, transforming the evidence into concrete policy conclusions can be challenging. Policy development should therefore rely on the systematic evidence of a whole body of research on a specific policy question rather than on a single analysis.
In this sense, this thesis cannot and should not be viewed as a comprehensive analysis of specific policy issues but rather as a first step towards a better understanding of certain aspects of a policy question. The thesis applies new and innovative identification strategies to policy-relevant and topical questions in the fields of labor economics and behavioral environmental economics. Each chapter relies on a different identification strategy. In the first chapter, we employ a difference-in-differences approach to exploit the quasi-experimental change in the entitlement of the maximum unemployment benefit duration to identify the medium-run effects of reduced benefit durations on post-unemployment outcomes. Shortening benefit duration carries a double dividend: it generates fiscal benefits without deteriorating the quality of job matches. On the contrary, shortened benefit durations improve medium-run earnings and employment, possibly through containing the negative effects of skill depreciation or stigmatization. While the first chapter provides only indirect evidence on the underlying behavioral channels, in the second chapter I develop a novel approach that allows us to learn about the relative importance of the two key margins of job search - reservation wage choice and search effort. In the framework of a standard non-stationary job search model, I show how the exit rate from unemployment can be decomposed in a way that is informative about reservation wage movements over the unemployment spell. The empirical analysis relies on a sharp discontinuity in unemployment benefit entitlement, which can be exploited in a regression-discontinuity approach to identify the effects of extended benefit durations on unemployment and survivor functions. I find evidence that calls for an important role of reservation wage choices for job search behavior. This can have direct implications for the optimal design of unemployment insurance policies.
The third chapter - while thematically detached from the other chapters - addresses one of the major policy challenges of the 21st century: climate change and resource consumption. Many governments have recently put energy efficiency on top of their agendas. While pricing instruments aimed at regulating the energy demand have often been found to be short-lived and difficult to enforce politically, the focus of energy conservation programs has shifted towards behavioral approaches - such as provision of information or social norm feedback. The third chapter describes a randomized controlled field experiment in which we discuss the effectiveness of different types of feedback on residential electricity consumption. We find that detailed and real-time feedback caused persistent electricity reductions on the order of 3 to 5% of daily electricity consumption. Social norm information can also generate substantial electricity savings when designed appropriately. The findings suggest that behavioral approaches constitute an effective and relatively cheap way of improving residential energy efficiency.
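The two-group, two-period logic behind the difference-in-differences design used in the first chapter can be written out in a few lines. This is the generic textbook estimator on group means, not the chapter's actual specification, which would include controls, fixed effects, and clustered standard errors; the sample numbers are hypothetical.

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Canonical 2x2 difference-in-differences estimate from group means.

    The effect is the change in the treated group's mean outcome minus
    the change in the control group's mean outcome, which nets out
    common time trends under the parallel-trends assumption.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical earnings data: both groups trend upward by 1 unit,
# the treated group gains an extra 0.5 on top of the common trend.
effect = diff_in_diff([10, 12], [11.5, 13.5], [9, 11], [10, 12])  # 0.5
```

The same subtraction-of-trends idea underlies the chapter's comparison of workers affected and unaffected by the benefit-duration reform, before and after the policy change.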
Abstract:
BACKGROUND: Twelve-step mutual-help groups (TMGs) are among the most available forms of support for homeless individuals with alcohol problems. Qualitative research, however, has suggested that this population often has negative perceptions of these groups, which has been shown to be associated with low TMG attendance. It is important to understand this population's perceptions of TMGs and their association with alcohol outcomes to provide more appropriate and better tailored programming for this multiply affected population. The aims of this cross-sectional study were to (a) qualitatively examine perception of TMGs in this population and (b) quantitatively evaluate its association with motivation, treatment attendance and alcohol outcomes. METHODS: Participants (N=62) were chronically homeless individuals with alcohol problems who received single-site Housing First within a larger evaluation study. Perceptions of TMGs were captured using an open-ended item. Quantitative outcome variables were created from assessments of motivation, treatment attendance and alcohol outcomes. RESULTS: Findings indicated that perceptions of TMGs were primarily negative followed by positive and neutral perceptions, respectively. There were significant, positive associations between perceptions of TMGs and motivation and treatment attendance, whereas no association was found for alcohol outcomes. CONCLUSIONS: Although some individuals view TMGs positively, alternative forms of help are needed to engage the majority of chronically homeless individuals with alcohol problems.
Abstract:
Electricity is a strategic service in modern societies. Thus, it is extremely important for governments to be able to guarantee an affordable and reliable supply, which depends to a great extent on an adequate expansion of the generation and transmission capacities. Cross-border integration of electricity markets creates new challenges for the regulators, since the evolution of the market is now influenced by the characteristics and policies of neighbouring countries. There is still no agreement on why and how regions should integrate their electricity markets. The aim of this thesis is to improve the understanding of integrated electricity markets and how their behaviour depends on the prevailing characteristics of the national markets and the policies implemented in each country. We developed a simulation model to analyse under what circumstances integration is desirable. This model is used to study three cases of interconnection between two countries. Several policies regarding interconnection expansion and operation, combined with different generation capacity adequacy mechanisms, are evaluated. The thesis is composed of three papers. The first paper presents a detailed description of the model and an analysis of the case of Colombia and Ecuador. It shows that market coupling can bring important benefits, but the relative size of the countries can lead to import dependency issues in the smaller country. The second paper compares the case of Colombia and Ecuador with the case of Great Britain and France. These countries are significantly different in terms of electricity sources, hydro-storage capacity, complementarity and demand growth. We show that complementarity is essential in order to obtain benefits from integration, while higher demand growth and hydro-storage capacity can lead to counterintuitive outcomes, thus complicating policy design.
In the third paper, an extended version of the model presented in the first paper is used to analyse the case of Finland and its interconnection with Russia. Different trading arrangements are considered. We conclude that unless interconnection capacity is expanded, the current trading arrangement, where a single trader owns the transmission rights and limits the flow during peak hours, is beneficial for Finland. In case of interconnection expansion, market coupling would be preferable. We also show that the costs of maintaining a strategic reserve in Finland are justified in order to limit import dependency, while still reaping the benefits of interconnection. In general, we conclude that electricity market integration can bring benefits if the right policies are implemented. However, a large interconnection capacity is only desirable if the countries exhibit significant complementarity and trust each other. The outcomes of policies aimed at guaranteeing security of supply at a national level can be quite counterintuitive due to the interactions between neighbouring countries and their effects on interconnection and generation investments. Thus, it is important for regulators to understand these interactions and coordinate their decisions in order to take advantage of the interconnection without putting security of supply at risk. But it must be taken into account that even when integration brings benefits to the region, some market participants lose and might try to hinder the integration process.
Abstract:
ISSUES: There have been reviews on the association between density of alcohol outlets and harm including studies published up to December 2008. Since then the number of publications has increased dramatically. The study reviews the more recent studies with regard to their utility to inform policy. APPROACH: A systematic review found more than 160 relevant studies (published between January 2009 and October 2014). The review focused on: (i) outlet density and assaultive or intimate partner violence; (ii) studies including individual level data; or (iii) 'natural experiments'. KEY FINDINGS: Despite overall evidence for an association between density and harm, there is little evidence on causal direction (i.e. whether demand leads to more supply or increased availability increases alcohol use and harm). When outlet types (e.g. bars, supermarkets) are analysed separately, studies are too methodologically diverse and partly contradictory to permit firm conclusions besides those pertaining to high outlet densities in areas such as entertainment districts. Outlet density commonly had little effect on individual-level alcohol use, and the few 'natural experiments' on restricting densities showed little or no effects. IMPLICATIONS AND CONCLUSIONS: Although outlet densities are likely to be positively related to alcohol use and harm, few policy recommendations can be given as effects vary across study areas, outlet types and outlet cluster size. Future studies should examine in detail outlet types, compare different outcomes associated with different strengths of association with alcohol, analyse non-linear effects and compare different methodologies. Purely aggregate-level studies examining total outlet density only should be abandoned. [Gmel G, Holmes J, Studer J. Are alcohol outlet densities strongly associated with alcohol-related outcomes? A critical review of recent evidence. Drug Alcohol Rev 2015].
Abstract:
BACKGROUND: Lipid-lowering therapy is costly but effective at reducing coronary heart disease (CHD) risk. OBJECTIVE: To assess the cost-effectiveness and public health impact of Adult Treatment Panel III (ATP III) guidelines and compare with a range of risk- and age-based alternative strategies. DESIGN: The CHD Policy Model, a Markov-type cost-effectiveness model. DATA SOURCES: National surveys (1999 to 2004), vital statistics (2000), the Framingham Heart Study (1948 to 2000), other published data, and a direct survey of statin costs (2008). TARGET POPULATION: U.S. population age 35 to 85 years. TIME HORIZON: 2010 to 2040. PERSPECTIVE: Health care system. INTERVENTION: Lowering of low-density lipoprotein cholesterol with HMG-CoA reductase inhibitors (statins). OUTCOME MEASURE: Incremental cost-effectiveness. RESULTS OF BASE-CASE ANALYSIS: Full adherence to ATP III primary prevention guidelines would require starting (9.7 million) or intensifying (1.4 million) statin therapy for 11.1 million adults and would prevent 20,000 myocardial infarctions and 10,000 CHD deaths per year at an annual net cost of $3.6 billion ($42,000/QALY) if low-intensity statins cost $2.11 per pill. The ATP III guidelines would be preferred over alternative strategies if society is willing to pay $50,000/QALY and statins cost $1.54 to $2.21 per pill. At higher statin costs, ATP III is not cost-effective; at lower costs, more liberal statin-prescribing strategies would be preferred; and at costs less than $0.10 per pill, treating all persons with low-density lipoprotein cholesterol levels greater than 3.4 mmol/L (>130 mg/dL) would yield net cost savings. RESULTS OF SENSITIVITY ANALYSIS: Results are sensitive to the assumptions that LDL cholesterol becomes less important as a risk factor with increasing age and that little disutility results from taking a pill every day. LIMITATION: Randomized trial evidence for statin effectiveness is not available for all subgroups.
CONCLUSION: The ATP III guidelines are relatively cost-effective and would have a large public health impact if implemented fully in the United States. Alternate strategies may be preferred, however, depending on the cost of statins and how much society is willing to pay for better health outcomes. FUNDING: Flight Attendants' Medical Research Institute and the Swanson Family Fund. The Framingham Heart Study and Framingham Offspring Study are conducted and supported by the National Heart, Lung, and Blood Institute.
Abstract:
STUDY DESIGN: Prospective, controlled, observational outcome study using clinical, radiographic, and patient/physician-based questionnaire data, with patient outcomes at 12 months follow-up. OBJECTIVE: To validate appropriateness criteria for low back surgery. SUMMARY OF BACKGROUND DATA: Most surgical treatment failures are attributed to poor patient selection, but no widely accepted consensus exists on detailed indications for appropriate surgery. METHODS: Appropriateness criteria for low back surgery have been developed by a multispecialty panel using the RAND appropriateness method. Based on panel criteria, a prospective study compared outcomes of patients appropriately and inappropriately treated at a single institution with 12 months follow-up assessment. Included were patients with low back pain and/or sciatica referred to the neurosurgical department. Information about symptoms, neurologic signs, the health-related quality of life (SF-36), disability status (Roland-Morris), and pain intensity (VAS) was assessed at baseline, at 6 months, and at 12 months follow-up. The appropriateness criteria were administered prospectively to each clinical situation and outside of the clinical setting, with the surgeon and patients blinded to the results of the panel decision. The patients were further stratified into 2 groups: appropriate treatment group (ATG) and inappropriate treatment group (ITG). RESULTS: Overall, 398 patients completed all forms at 12 months. Treatment was considered appropriate for 365 participants and inappropriate for 33 participants. The mean improvement in the SF-36 physical component score at 12 months was significantly higher in the ATG (mean: 12.3 points) than in the ITG (mean: 6.8 points) (P = 0.01), as well as the mean improvement in the SF-36 mental component score (ATG mean: 5.0 points; ITG mean: -0.5 points) (P = 0.02). 
Improvement was also significantly higher in the ATG for the mean VAS back pain (ATG mean: 2.3 points; ITG mean: 0.8 points; P = 0.02) and Roland-Morris disability score (ATG mean: 7.7 points; ITG mean: 4.2 points; P = 0.004). The ATG also had a higher improvement in mean VAS for sciatica (4.0 points) than the ITG (2.8 points), but the difference was not significant (P = 0.08). The SF-36 General Health score declined in both groups after 12 months; however, the decline was worse in the ITG (mean decline: 8.2 points) than in the ATG (mean decline: 1.2 points) (P = 0.04). Overall, in comparison to ITG patients, ATG patients had significantly higher improvement at 12 months, both statistically and clinically. CONCLUSION: In comparison to previously reported literature, our study is the first to assess the utility of appropriateness criteria for low back surgery at 1-year follow-up with multiple outcome dimensions. Our results confirm the hypothesis that application of appropriateness criteria can significantly improve patient outcomes.
Abstract:
Job protection and cash benefits are key elements of parental leave (PL) systems. We study how these two policy instruments affect return-to-work and medium-run labour market outcomes of mothers of newborn children. Analysing a series of major PL policy changes in Austria, we find that longer cash benefits lead to a significant delay in return-to-work, particularly so in the period that is job-protected. Prolonged parental leave absence induced by these policy changes does not appear to hurt mothers' labour market outcomes in the medium run. We build a non-stationary model of job search after childbirth to isolate the role of the two policy instruments. The model matches return-to-work and return to same employer profiles under the various factual policy configurations. Counterfactual policy simulations indicate that a system that combines cash with protection dominates other systems in generating time for care immediately after birth while maintaining mothers' medium-run labour market attachment.
Impact of low-level viremia on clinical and virological outcomes in treated HIV-1-infected patients.
Abstract:
BACKGROUND: The goal of antiretroviral therapy (ART) is to reduce HIV-related morbidity and mortality by suppressing HIV replication. The prognostic value of persistent low-level viremia (LLV), particularly for clinical outcomes, is unknown. OBJECTIVE: To assess the association of different levels of LLV with virological failure, AIDS event, and death among HIV-infected patients receiving combination ART. METHODS: We analyzed data from 18 cohorts in Europe and North America, contributing to the ART Cohort Collaboration. Eligible patients achieved viral load below 50 copies/ml within 3-9 months after ART initiation. LLV50-199 was defined as two consecutive viral loads between 50 and 199 copies/ml and LLV200-499 as two consecutive viral loads between 50 and 499 copies/ml, with at least one between 200 and 499 copies/ml. We used Cox models to estimate the association of LLV with virological failure (two consecutive viral loads at least 500 copies/ml or one viral load at least 500 copies/ml, followed by a modification of ART) and AIDS event/death. RESULTS: Among 17 902 patients, 624 (3.5%) experienced LLV50-199 and 482 (2.7%) LLV200-499. Median follow-up was 2.3 and 3.1 years for virological and clinical outcomes, respectively. There were 1903 virological failures, 532 AIDS events and 480 deaths. LLV200-499 was strongly associated with virological failure [adjusted hazard ratio (aHR) 3.97, 95% confidence interval (CI) 3.05-5.17]. LLV50-199 was weakly associated with virological failure (aHR 1.38, 95% CI 0.96-2.00). LLV50-199 and LLV200-499 were not associated with AIDS event/death (aHR 1.19, 95% CI 0.78-1.82; and aHR 1.11, 95% CI 0.72-1.71, respectively). CONCLUSION: LLV200-499 was strongly associated with virological failure, but not with AIDS event/death. Our results support the US guidelines, which define virological failure as a confirmed viral load above 200 copies/ml.
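The two LLV definitions in the abstract can be made concrete as a small function over a patient's sequence of viral-load measurements. Only the thresholds come from the abstract; the function name, return convention, and the precedence of the higher category are illustrative assumptions.

```python
def classify_llv(viral_loads):
    """Classify low-level viremia from consecutive viral loads (copies/ml).

    Per the abstract's definitions:
    - LLV200-499: two consecutive loads in 50-499, at least one in 200-499.
    - LLV50-199:  two consecutive loads in 50-199.
    Returns 'LLV200-499', 'LLV50-199', or None; the higher category
    is checked first (an assumed, not stated, precedence rule).
    """
    for a, b in zip(viral_loads, viral_loads[1:]):
        if 50 <= a <= 499 and 50 <= b <= 499 and (a >= 200 or b >= 200):
            return "LLV200-499"
    for a, b in zip(viral_loads, viral_loads[1:]):
        if 50 <= a <= 199 and 50 <= b <= 199:
            return "LLV50-199"
    return None
```

Note that the "two consecutive" requirement matters: a single measurement of 250 copies/ml surrounded by suppressed values does not qualify as LLV under either definition.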