771 results for Vanhanen, Janne
Abstract:
INTRODUCTION Patients who are lost to follow-up (LTFU) while on antiretroviral therapy (ART) pose challenges to the long-term success of ART programs. We describe the extent to which patients considered LTFU are misclassified as having truly disengaged from care when they are in fact still alive and on ART, and explain the reasons for ART discontinuation, using our active tracing program to further improve ART retention programs and policies. METHODS We identified adult ART patients who missed a clinic appointment by more than 3 weeks between January 2006 and December 2010, assuming that such patients would miss doses of their antiretroviral drugs. Patients considered LTFU who had consented during ART registration were traced by phone or home visits; true ART status after tracing was documented. Reasons for ART discontinuation were also recorded for those who had stopped ART. RESULTS Of the 4,560 suspected LTFU cases, 1,384 (30%) could not be traced. Of the 3,176 successfully traced patients, 952 (30%) were dead and 2,224 (70%) were alive, of whom 2,183 (99.5%) had started ART according to phone-based self-reports or physical verification during in-person interviews. Of those who started ART, 957 (44%) had stopped ART and 1,226 (56%) reported still taking ART at the time of interview, sourcing drugs from another clinic, using alternative ART sources, or making brief ART interruptions. Among 940 cases with recorded reasons for ART discontinuation, failure to remember (17%), being too weak or sick (12%), travel (46%), and lack of transport to the clinic (16%) were frequently cited; reasons differed by gender. CONCLUSION The LTFU category comprises a sizeable proportion of patients still taking ART, which may bias retention estimates and misdirect resources at the clinic and national levels if not properly accounted for.
Clinics should consider further decentralization efforts, increased drug allocations for patients who travel frequently, and improved communication on patient transfers between clinics to increase retention and adherence.
Abstract:
Loss to follow-up (LTFU) is a common problem in many epidemiological studies. In antiretroviral treatment (ART) programs for patients with human immunodeficiency virus (HIV), mortality estimates can be biased if the LTFU mechanism is non-ignorable, that is, mortality differs between lost and retained patients. In this setting, routine procedures for handling missing data may lead to biased estimates. To appropriately deal with non-ignorable LTFU, explicit modeling of the missing data mechanism is needed. This can be based on additional outcome ascertainment for a sample of patients LTFU, for example, through linkage to national registries or through survey-based methods. In this paper, we demonstrate how this additional information can be used to construct estimators based on inverse probability weights (IPW) or multiple imputation. We use simulations to contrast the performance of the proposed estimators with methods widely used in HIV cohort research for dealing with missing data. The practical implications of our approach are illustrated using South African ART data, which are partially linkable to South African national vital registration data. Our results demonstrate that while IPWs and proper imputation procedures can be easily constructed from additional outcome ascertainment to obtain valid overall estimates, neglecting non-ignorable LTFU can result in substantial bias. We believe the proposed estimators are readily applicable to a growing number of studies where LTFU is appreciable, but additional outcome data are available through linkage or surveys of patients LTFU. Copyright © 2013 John Wiley & Sons, Ltd.
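The inverse probability weighting idea described above can be sketched with a toy example (this is an illustration of the principle, not the paper's estimator; the function name and all numbers are hypothetical): mortality is observed directly for patients retained in care, while outcomes for a traced sample of LTFU patients are up-weighted by the inverse of the tracing probability.

```python
def ipw_mortality(deaths_in_care, n_in_care, deaths_traced, n_traced, n_ltfu):
    """Overall mortality estimate: each traced LTFU outcome is up-weighted
    by the inverse of the probability of being traced (n_traced / n_ltfu)."""
    weight = n_ltfu / n_traced          # inverse of the tracing probability
    weighted_deaths = deaths_in_care + deaths_traced * weight
    return weighted_deaths / (n_in_care + n_ltfu)

# Hypothetical cohort: 900 retained patients (50 deaths); 100 LTFU,
# of whom 50 were traced (30 deaths among the traced sample).
naive = 50 / 900                                  # ignores LTFU entirely
corrected = ipw_mortality(50, 900, 30, 50, 100)   # -> 0.11
```

If mortality among the lost is higher than among the retained, as here, the naive estimate understates overall mortality, which is the non-ignorable LTFU bias the paper addresses.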
Abstract:
OBJECTIVES: Treatment as prevention depends on retaining HIV-infected patients in care. We investigated the effect on HIV transmission of bringing patients lost to follow-up (LTFU) back into care. DESIGN: Mathematical model. METHODS: Stochastic mathematical model of cohorts of 1000 HIV-infected patients on antiretroviral therapy (ART), based on data from two clinics in Lilongwe, Malawi. We calculated cohort viral load (CVL; sum of individual mean viral loads each year) and used a mathematical relationship between viral load and transmission probability to estimate the number of new HIV infections. We simulated four scenarios: 'no LTFU' (all patients stay in care); 'no tracing' (patients LTFU are not traced); 'immediate tracing' (after a missed clinic appointment); and 'delayed tracing' (after six months). RESULTS: About 440 of 1000 patients were LTFU over five years. CVL (million copies/ml per 1000 patients) was 3.7 (95% prediction interval [PrI] 2.9-4.9) for no LTFU, 8.6 (95% PrI 7.3-10.0) for no tracing, 7.7 (95% PrI 6.2-9.1) for immediate tracing, and 8.0 (95% PrI 6.7-9.5) for delayed tracing. Comparing no LTFU with no tracing, the number of new infections increased from 33 (95% PrI 29-38) to 54 (95% PrI 47-60) per 1000 patients. Immediate tracing prevented 3.6 (95% PrI -3.3-12.8) and delayed tracing 2.5 (95% PrI -5.8-11.1) new infections per 1000 patients. Immediate tracing was more efficient than delayed tracing: 116 and 142 tracing efforts, respectively, were needed to prevent one new infection. CONCLUSION: Tracing of patients LTFU enhances the preventive effect of ART, but the number of transmissions prevented is small.
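The CVL calculation above is a simple sum; the mapping from viral load to transmission risk can be sketched as follows. The functional form and all parameter values here are placeholders for illustration, not those used in the model.

```python
import math

def cohort_viral_load(mean_viral_loads):
    """CVL: the sum of individual mean viral loads (copies/ml) in the cohort."""
    return sum(mean_viral_loads)

def transmission_rate(viral_load, base_rate=0.0017, fold_per_log10=2.45, ref_log10=4.0):
    """Illustrative relationship: the transmission rate is multiplied by
    `fold_per_log10` for every log10 increase in viral load above a
    reference of 10**ref_log10 copies/ml (placeholder parameter values)."""
    return base_rate * fold_per_log10 ** (math.log10(viral_load) - ref_log10)

# Patients LTFU who interrupt ART rebound to high viral loads, raising the
# cohort's CVL and hence the expected number of new infections.
cvl = cohort_viral_load([1e4, 5e4, 2e3])
```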
Abstract:
BACKGROUND In adults it is well documented that there are substantial losses to the programme between HIV testing and start of antiretroviral therapy (ART). The magnitude of and reasons for loss to follow-up and death between HIV diagnosis and start of ART in children are not well defined. METHODS We searched the PubMed and EMBASE databases for studies on children followed between HIV diagnosis and start of ART in low-income settings. We examined the proportion of children with a CD4 cell count/percentage measured after being diagnosed with HIV infection, the number of treatment-eligible children starting ART and predictors of loss to programme. Data were extracted in duplicate. RESULTS Eight studies from sub-Saharan Africa and two studies from Asia with a total of 10,741 children were included. Median age ranged from 2.2 to 6.5 years. Between 78.0 and 97.0% of HIV-infected children subsequently had a CD4 cell count/percentage measured, 63.2 to 90.7% of children with an eligibility assessment met the eligibility criteria for the particular setting and time, and 39.5 to 99.4% of the eligible children started ART. Three studies reported an association between low CD4 count/percentage and ART initiation, while no association was reported for gender. Only two studies reported on pre-ART mortality, finding rates of 13 and 6 per 100 person-years. CONCLUSION Most children who presented for HIV care met eligibility criteria for ART. There is an urgent need for strategies to improve access to and retention in care of HIV-infected children in resource-limited settings.
Abstract:
OBJECTIVES Mortality in patients starting antiretroviral therapy (ART) is higher in Malawi and Zambia than in South Africa. We examined whether different monitoring of ART (viral load [VL] in South Africa and CD4 count in Malawi and Zambia) could explain this mortality difference. DESIGN Mathematical modelling study based on data from ART programmes. METHODS We used a stochastic simulation model to study the effect of VL monitoring on mortality over 5 years. In baseline scenario A, all parameters were identical between strategies except for more timely and complete detection of treatment failure with VL monitoring. Additional scenarios introduced delays in switching to second-line ART (scenario B) or higher virologic failure rates (due to worse adherence) when monitoring was based on CD4 counts only (scenario C). Results are presented as relative risks (RR) with 95% prediction intervals and the percentage of the observed mortality difference explained. RESULTS RRs comparing VL with CD4 cell count monitoring were 0.94 (0.74-1.03) in scenario A, 0.94 (0.77-1.02) with delayed switching (scenario B) and 0.80 (0.44-1.07) when assuming a 3-times higher rate of failure (scenario C). The observed mortality at 3 years was 10.9% in Malawi and Zambia and 8.6% in South Africa (absolute difference 2.3%). The percentage of the mortality difference explained by VL monitoring ranged from 4% (scenario A) to 32% (scenarios B and C combined, assuming a 3-times higher failure rate). Eleven percent was explained by non-HIV-related mortality. CONCLUSIONS VL monitoring reduces mortality moderately when assuming improved adherence and decreased failure rates.
Abstract:
BACKGROUND Monitoring of HIV viral load in patients on combination antiretroviral therapy (ART) is not generally available in resource-limited settings. We examined the cost-effectiveness of qualitative point-of-care viral load tests (POC-VL) in sub-Saharan Africa. DESIGN Mathematical model based on longitudinal data from the Gugulethu and Khayelitsha township ART programmes in Cape Town, South Africa. METHODS Cohorts of patients on ART monitored by POC-VL, CD4 cell count or clinically were simulated. Scenario A considered the more accurate detection of treatment failure with POC-VL only, and scenario B also considered the effect on HIV transmission. Scenario C further assumed that the risk of virologic failure is halved with POC-VL due to improved adherence. We estimated the change in costs per quality-adjusted life-year gained (incremental cost-effectiveness ratios, ICERs) of POC-VL compared with CD4 and clinical monitoring. RESULTS POC-VL tests with detection limits less than 1000 copies/ml increased costs due to unnecessary switches to second-line ART, without improving survival. Assuming POC-VL unit costs between US$5 and US$20 and detection limits between 1000 and 10,000 copies/ml, the ICER of POC-VL was US$4010-US$9230 compared with clinical and US$5960-US$25540 compared with CD4 cell count monitoring. In Scenario B, the corresponding ICERs were US$2450-US$5830 and US$2230-US$10380. In Scenario C, the ICER ranged between US$960 and US$2500 compared with clinical monitoring and between cost-saving and US$2460 compared with CD4 monitoring. CONCLUSION The cost-effectiveness of POC-VL for monitoring ART is improved by a higher detection limit, by taking the reduction in new HIV infections into account and assuming that failure of first-line ART is reduced due to targeted adherence counselling.
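The ICERs reported above are the cost difference divided by the effectiveness difference between two strategies. A minimal sketch, using made-up numbers rather than the study's:

```python
def icer(cost_new, cost_comparator, qaly_new, qaly_comparator):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained
    by the new strategy relative to the comparator."""
    return (cost_new - cost_comparator) / (qaly_new - qaly_comparator)

# Hypothetical: POC-VL monitoring costs US$500 more per patient and gains
# 0.1 QALY over clinical monitoring -> US$5000 per QALY gained.
example = icer(1500.0, 1000.0, 2.1, 2.0)
```

A strategy lies on the cost-effectiveness frontier only if no mix of other strategies achieves the same effect at lower cost, which is why targeted VL monitoring drops off the frontier in some of the scenarios above.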
Abstract:
Background: WHO's 2013 revisions to its Consolidated Guidelines on antiretroviral drugs recommend routine viral load monitoring, rather than clinical or immunological monitoring, as the preferred monitoring approach on the basis of clinical evidence. However, HIV programmes in resource-limited settings require guidance on the most cost-effective use of resources in view of other competing priorities such as expansion of antiretroviral therapy coverage. We assessed the cost-effectiveness of alternative patient monitoring strategies. Methods: We evaluated a range of monitoring strategies, including clinical, CD4 cell count, and viral load monitoring, alone and together, at different frequencies and with different criteria for switching to second-line therapies. We used three independently constructed and validated models simultaneously. We estimated costs on the basis of resource use projected in the models and associated unit costs; we quantified impact as disability-adjusted life years (DALYs) averted. We compared alternatives using incremental cost-effectiveness analysis. Findings: All models show that clinical monitoring delivers significant benefit compared with a hypothetical baseline scenario with no monitoring or switching. Regular CD4 cell count monitoring confers a benefit over clinical monitoring alone, at an incremental cost that makes it affordable in more settings than viral load monitoring, which is currently more expensive. Viral load monitoring without CD4 cell count every 6-12 months provides the greatest reductions in morbidity and mortality, but incurs a high cost per DALY averted, resulting in lost opportunities to generate health gains if implemented instead of increasing antiretroviral therapy coverage or expanding antiretroviral therapy eligibility.
Interpretation: The priority for HIV programmes should be to expand antiretroviral therapy coverage, firstly at CD4 cell count lower than 350 cells per μL, and then at a CD4 cell count lower than 500 cells per μL, using lower-cost clinical or CD4 monitoring. At current costs, viral load monitoring should be considered only after high antiretroviral therapy coverage has been achieved. Point-of-care technologies and other factors reducing costs might make viral load monitoring more affordable in future. Funding: Bill & Melinda Gates Foundation, WHO.
Abstract:
Bovine tuberculosis (bTB) caused by Mycobacterium bovis or M. caprae has recently (re-)emerged in livestock and wildlife in all countries bordering Switzerland (CH) and the Principality of Liechtenstein (FL). Comprehensive data for Swiss and Liechtenstein wildlife are not available so far, although two native species, wild boar (Sus scrofa) and red deer (Cervus elaphus elaphus), act as bTB reservoirs elsewhere in continental Europe. Our aims were (1) to assess the occurrence of bTB in these wild ungulates in CH/FL and to reinforce scanning surveillance in all wild mammals; and (2) to evaluate the risk of a future bTB reservoir forming in wild boar and red deer in CH/FL. Tissue samples collected from 2009 to 2011 from 434 hunted red deer and wild boar and from eight diseased ungulates with tuberculosis-like lesions were tested by direct real-time PCR and culture to detect mycobacteria of the Mycobacterium tuberculosis complex (MTBC). Identification of suspicious colonies was attempted by real-time PCR, genotyping and spoligotyping. Information on risk factors for bTB maintenance within wildlife populations was retrieved from the literature, and the situation regarding the identified factors was assessed for our study areas. Mycobacteria of the MTBC were detected in six of 165 wild boar (3.6%; 95% CI: 1.4-7.8) but in none of the 269 red deer (0%; 0-1.4). M. microti was identified in two MTBC-positive wild boar, while species identification remained unsuccessful in four cases. The main risk factors for bTB maintenance worldwide, including various causes of aggregation often resulting from intensive wildlife management, are largely absent in CH and FL. In conclusion, M. bovis and M. caprae were not detected, but we report for the first time MTBC mycobacteria in Swiss wild boar. Present conditions seem unfavorable for the emergence of a reservoir; nevertheless, increasing wild ungulate populations and the consumption of offal may represent a risk.
Abstract:
Objectives: HIV 'treatment as prevention' (TasP) describes early treatment of HIV-infected patients intended to reduce viral load and transmission. Crucial assumptions for estimating TasP's effectiveness are the underlying estimates of transmission risk. We aimed to determine transmission risk during primary infection and the relationship between viral load and HIV transmission risk. Design: A systematic review and meta-analysis. Methods: We searched the PubMed and Embase databases for studies that established a relationship between viral load and transmission risk, or between primary infection and transmission risk, in serodiscordant couples. We analysed assumptions about the relationship between viral load and transmission risk, and between the duration of primary infection and transmission risk. Results: We found 36 eligible articles, based on six different study populations. Studies consistently found that higher viral loads lead to higher HIV transmission rates, but assumptions about the shape of this increase varied from exponential increase to saturation. The assumed duration of primary infection ranged from 1.5 to 12 months; for each additional month, the log10 transmission rate ratio between primary and asymptomatic infection decreased by 0.40. Conclusion: Assumptions and estimates of the relationship between viral load and transmission risk, and of the relationship between primary infection and transmission risk, vary substantially, and predictions of TasP's effectiveness should take this uncertainty into account.
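The reported slope of -0.40 per month on the log10 scale implies a simple multiplicative relationship: each extra month of assumed primary-infection duration scales the primary-vs-asymptomatic transmission rate ratio by a factor of 10 to the power -0.40. A minimal sketch of that arithmetic:

```python
def rate_ratio_scaling(extra_months, slope=-0.40):
    """Multiplicative change in the transmission rate ratio between primary
    and asymptomatic infection when the assumed duration of primary
    infection is extended by `extra_months` (log10 slope of -0.40/month,
    as reported in the meta-analysis)."""
    return 10 ** (slope * extra_months)

# Each additional month of assumed duration shrinks the ratio to ~40%
# of its previous value, so duration assumptions matter a great deal.
one_month = rate_ratio_scaling(1)
```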
Abstract:
In several studies of antiretroviral treatment (ART) programs for persons with human immunodeficiency virus infection, investigators have reported a higher rate of loss to follow-up (LTFU) among patients initiating ART in recent years than among patients who initiated ART during earlier time periods. This finding is frequently interpreted as reflecting deterioration of patient retention in the face of increasing patient loads. However, in this paper we demonstrate by simulation that transient gaps in follow-up can lead to bias when standard survival analysis techniques are applied. We created a simulated cohort of patients with different dates of ART initiation. Rates of ART interruption, ART resumption, and mortality were assumed to remain constant over time, but when we applied a standard definition of LTFU, the simulated probability of being classified LTFU at a particular ART duration was substantially higher in recently enrolled cohorts. This suggests that much of the apparent trend towards increased LTFU may be attributable to bias caused by transient interruptions in care. Alternative statistical techniques need to be used when analyzing predictors of LTFU; for example, using "prospective" definitions of LTFU in place of "retrospective" definitions. Similar considerations may apply when analyzing predictors of LTFU from treatment programs for other chronic diseases.
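The mechanism can be shown with a toy example (the definition, window and visit schedule are hypothetical, not the paper's simulation): under a "retrospective" LTFU definition, a patient whose transient care gap happens to straddle the database closing date is classified LTFU even though they later return to care.

```python
def classified_ltfu(visit_months, close_month, window=3):
    """'Retrospective' LTFU definition: a patient is LTFU if their last
    recorded visit falls more than `window` months before database closure."""
    last_visit = max(v for v in visit_months if v <= close_month)
    return close_month - last_visit > window

# One patient with a single transient 6-month gap who then resumes care:
visits = [0, 1, 2, 3, 4, 5, 6, 7, 8, 14, 15, 16]
during_gap = classified_ltfu(visits, close_month=13)    # counted as LTFU
after_return = classified_ltfu(visits, close_month=16)  # same patient, retained
```

Recently enrolled patients have had less calendar time in which an interruption could resolve before closure, so this misclassification inflates apparent LTFU in recent cohorts even when underlying interruption rates are constant.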
Abstract:
BACKGROUND Rheumatic heart disease accounts for up to 250 000 premature deaths every year worldwide and can be regarded as a physical manifestation of poverty and social inequality. We aimed to estimate the prevalence of rheumatic heart disease in endemic countries as assessed by different screening modalities and as a function of age. METHODS We searched Medline, Embase, the Latin American and Caribbean System on Health Sciences Information, African Journals Online, and the Cochrane Database of Systematic Reviews for population-based studies published between Jan 1, 1993, and June 30, 2014, that reported on prevalence of rheumatic heart disease among children and adolescents (≥5 years to <18 years). We assessed prevalence of clinically silent and clinically manifest rheumatic heart disease in random effects meta-analyses according to screening modality and geographical region. We assessed the association between social inequality and rheumatic heart disease with the Gini coefficient. We used Poisson regression to analyse the effect of age on prevalence of rheumatic heart disease and estimated the incidence of rheumatic heart disease from prevalence data. FINDINGS We included 37 populations in the systematic review and meta-analysis. The pooled prevalence of rheumatic heart disease detected by cardiac auscultation was 2·9 per 1000 people (95% CI 1·7-5·0) and by echocardiography it was 12·9 per 1000 people (8·9-18·6), with substantial heterogeneity between individual reports for both screening modalities (I(2)=99·0% and 94·9%, respectively). We noted an association between social inequality expressed by the Gini coefficient and prevalence of rheumatic heart disease (p=0·0002). The prevalence of clinically silent rheumatic heart disease (21·1 per 1000 people, 95% CI 14·1-31·4) was about seven to eight times higher than that of clinically manifest disease (2·7 per 1000 people, 1·6-4·4). 
Prevalence progressively increased with advancing age, from 4·7 per 1000 people (95% CI 0·0-11·2) at age 5 years to 21·0 per 1000 people (6·8-35·1) at 16 years. The estimated incidence was 1·6 per 1000 people (0·8-2·3) and remained constant across age categories (range 2·5, 95% CI 1·3-3·7 in 5-year-old children to 1·7, 0·0-5·1 in 15-year-old adolescents). We noted no sex-related differences in prevalence (p=0·829). INTERPRETATION We found a high prevalence of rheumatic heart disease in endemic countries. Although a reduction in social inequalities represents the cornerstone of community-based prevention, the importance of early detection of silent rheumatic heart disease remains to be further assessed. FUNDING UBS Optimus Foundation.
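The constant estimated incidence is consistent with the roughly linear rise of prevalence with age: if case mortality and recovery are negligible over childhood, incidence approximately equals the slope of prevalence against age. A rough check of that reasoning using the pooled figures above (the simplifying assumptions are ours, for illustration):

```python
def incidence_from_prevalence_slope(prev_younger, age_younger, prev_older, age_older):
    """Approximate incidence (per 1000 person-years) as the change in
    prevalence per year of age, assuming cases neither die nor recover."""
    return (prev_older - prev_younger) / (age_older - age_younger)

# 4.7 per 1000 at age 5 rising to 21.0 per 1000 at age 16:
approx_incidence = incidence_from_prevalence_slope(4.7, 5, 21.0, 16)
# ~1.5 per 1000 person-years, close to the reported estimate of 1.6
```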
Abstract:
Hepatitis E is considered an emerging human viral disease in industrialized countries. Studies from Switzerland report a human seroprevalence of hepatitis E virus (HEV) of 2.6-21%, a range lower than in adjacent European countries. The aim of this study was to determine whether HEV seroprevalence in domestic pigs and wild boars is also lower in Switzerland and whether it is increasing, which would indicate that this zoonotic viral infection is emerging. Serum samples collected from 2,001 pigs in 2006 and 2011 and from 303 wild boars from 2008 to 2012 were analysed by ELISA for the presence of HEV-specific antibodies. Overall HEV seroprevalence was 58.1% in domestic pigs and 12.5% in wild boars. Prevalence in domestic pigs was significantly higher in 2006 than in 2011. In conclusion, HEV seroprevalence in domestic pigs and wild boars in Switzerland is comparable with the seroprevalence in other countries and is not increasing. Therefore, the prevalence of HEV in humans must be related to factors other than the prevalence in pigs or wild boars.
Abstract:
OBJECTIVES Many paediatric antiretroviral therapy (ART) programmes in Southern Africa rely on CD4⁺ cell counts to monitor ART. We assessed the benefit of replacing CD4⁺ with viral load monitoring. DESIGN A mathematical modelling study. METHODS A simulation model of HIV progression over 5 years in children on ART, parameterized by data from seven South African cohorts. We simulated treatment programmes with 6-monthly CD4⁺ or 6- or 12-monthly viral load monitoring. We compared mortality, second-line ART use, immunological failure and time spent on failing ART. In further analyses, we varied the rate of virological failure, and assumed that the rate is higher with CD4⁺ than with viral load monitoring. RESULTS About 7% of children were predicted to die within 5 years, independent of the monitoring strategy. Compared with CD4⁺ monitoring, 12-monthly viral load monitoring reduced the 5-year risk of immunological failure from 1.6 to 1.0% and the mean time spent on failing ART from 6.6 to 3.6 months; 1% of children with CD4⁺ compared with 12% with viral load monitoring switched to second-line ART. Differences became larger when assuming higher rates of virological failure. When assuming higher virological failure rates with CD4⁺ than with viral load monitoring, up to 4.2% of children with CD4⁺ compared with 1.5% with viral load monitoring experienced immunological failure; the mean time spent on failing ART was 27.3 months with CD4⁺ monitoring and 6.0 months with viral load monitoring. CONCLUSION Viral load monitoring did not affect 5-year mortality, but reduced the time spent on failing ART, improved immunological response and increased switching to second-line ART.
Abstract:
Mathematical models of disease progression predict disease outcomes and are useful epidemiological tools for planners and evaluators of health interventions. The R package gems is a tool that simulates disease progression in patients and predicts the effect of different interventions on patient outcome. Disease progression is represented by a series of events (e.g., diagnosis, treatment and death), displayed in a directed acyclic graph. The vertices correspond to disease states and the directed edges represent events. The package gems allows simulations based on a generalized multistate model that can be described by a directed acyclic graph with continuous transition-specific hazard functions. The user can specify an arbitrary hazard function and its parameters. The model includes parameter uncertainty, does not need to be a Markov model, and may take the history of previous events into account. Applications are not limited to the medical field and extend to other areas where multistate simulation is of interest. We provide a technical explanation of the multistate models used by gems, explain the functions of gems and their arguments, and show a sample application.
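The kind of simulation gems performs can be sketched in Python. This is a conceptual illustration only, with constant hazards on each edge of the DAG; it is not the gems API, which additionally supports arbitrary transition-specific hazard functions, parameter uncertainty and history dependence.

```python
import random

def simulate_patient(hazards, rng):
    """Walk one patient through a multistate model given as a DAG:
    `hazards` maps (from_state, to_state) edges to constant hazard rates,
    and competing exponential event times decide the next transition.
    Returns the time to absorption and the visited state sequence."""
    state, t, path = "diagnosis", 0.0, ["diagnosis"]
    while True:
        edges = [(to, rate) for (frm, to), rate in hazards.items() if frm == state]
        if not edges:                       # absorbing state (no outgoing edges)
            return t, path
        dt, state = min((rng.expovariate(rate), to) for to, rate in edges)
        t += dt
        path.append(state)

# Example DAG: diagnosis -> treatment -> death, plus a direct
# diagnosis -> death edge (rates are arbitrary illustrative values).
hazards = {("diagnosis", "treatment"): 1.0,
           ("diagnosis", "death"): 0.1,
           ("treatment", "death"): 0.05}
time_to_absorption, trajectory = simulate_patient(hazards, random.Random(42))
```

Running many such patients and comparing outcome distributions under different hazard sets is, in essence, how intervention effects are predicted in this framework.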
Abstract:
BACKGROUND The cost-effectiveness of routine viral load (VL) monitoring of HIV-infected patients on antiretroviral therapy (ART) depends on various factors that differ between settings and across time. Low-cost point-of-care (POC) tests for VL are in development and may make routine VL monitoring affordable in resource-limited settings. We developed a software tool to study the cost-effectiveness of switching to second-line ART with different monitoring strategies, and focused on POC-VL monitoring. METHODS We used a mathematical model to simulate cohorts of patients from start of ART until death. We modeled 13 strategies (no 2nd-line, clinical, CD4 (with or without targeted VL), POC-VL, and laboratory-based VL monitoring, with different frequencies). We included a scenario with identical failure rates across strategies, and one in which routine VL monitoring reduces the risk of failure. We compared lifetime costs and averted disability-adjusted life-years (DALYs). We calculated incremental cost-effectiveness ratios (ICER). We developed an Excel tool to update the results of the model for varying unit costs and cohort characteristics, and conducted several sensitivity analyses varying the input costs. RESULTS Introducing 2nd-line ART had an ICER of US$1651-1766/DALY averted. Compared with clinical monitoring, the ICER of CD4 monitoring was US$1896-US$5488/DALY averted and VL monitoring US$951-US$5813/DALY averted. We found no difference between POC- and laboratory-based VL monitoring, except for the highest measurement frequency (every 6 months), where laboratory-based testing was more effective. Targeted VL monitoring was on the cost-effectiveness frontier only if the difference between 1st- and 2nd-line costs remained large, and if we assumed that routine VL monitoring does not prevent failure. CONCLUSION Compared with the less expensive strategies, the cost-effectiveness of routine VL monitoring essentially depends on the cost of 2nd-line ART. 
Our Excel tool is useful for determining optimal monitoring strategies for specific settings, with specific sex and age distributions and unit costs.