995 results for treatment thresholds


Relevance: 60.00%

Abstract:

BACKGROUND: Measurement of CD4+ T-lymphocytes (CD4) is a crucial parameter in the management of HIV patients, particularly in determining eligibility to initiate antiretroviral treatment (ART). A number of technologies exist for CD4 enumeration, with considerable variation in cost, complexity, and operational requirements. We conducted a systematic review of the performance of technologies for CD4 enumeration. METHODS AND FINDINGS: Studies were identified by searching the electronic databases MEDLINE and EMBASE using a pre-defined search strategy. Data on test accuracy and precision included bias and limits of agreement with a reference standard, and misclassification probabilities around CD4 thresholds of 200 and 350 cells/μl over a clinically relevant range. The secondary outcome measure was test imprecision, expressed as the percentage coefficient of variation. Thirty-two studies evaluating 15 CD4 technologies were included, of which fewer than half presented data on bias and misclassification compared with the same reference technology. At CD4 counts <350 cells/μl, bias ranged from -35.2 to +13.1 cells/μl, while at counts >350 cells/μl, bias ranged from -70.7 to +47 cells/μl, compared with the BD FACSCount as the reference technology. Misclassification around the threshold of 350 cells/μl ranged from 1-29% for upward classification, resulting in under-treatment, and from 7-68% for downward classification, resulting in over-treatment. Fewer than half of these studies reported within-laboratory precision or reproducibility of the CD4 values obtained. CONCLUSIONS: A wide range of bias and misclassification percentages around treatment thresholds was reported for the CD4 enumeration technologies included in this review, with few studies reporting assay precision. 
The lack of standardised methodology on test evaluation, including the use of different reference standards, is a barrier to assessing relative assay performance and could hinder the introduction of new point-of-care assays in countries where they are most needed.
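The accuracy metrics this review extracts — bias with limits of agreement against a reference standard, and misclassification around a treatment threshold — can be sketched as below. The paired counts, function names, and 1.96-SD limits are hypothetical illustrations under a normal approximation, not data or code from the review.

```python
from statistics import mean, stdev

def bland_altman(test, ref):
    """Bias and approximate 95% limits of agreement between a candidate
    assay and a reference standard (Bland-Altman style)."""
    diffs = [t - r for t, r in zip(test, ref)]
    bias, sd = mean(diffs), stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def misclassification(test, ref, threshold=350):
    """Upward misclassification (test above, reference below: risks
    under-treatment) and downward (test below, reference above: risks
    over-treatment), as fractions of all paired counts."""
    pairs = list(zip(test, ref))
    up = sum(t >= threshold > r for t, r in pairs)
    down = sum(t < threshold <= r for t, r in pairs)
    return up / len(pairs), down / len(pairs)

# Hypothetical paired CD4 counts (cells/uL): candidate assay vs. reference
ref = [180, 220, 300, 340, 360, 400, 500, 620]
test = [200, 210, 320, 355, 350, 380, 520, 600]
bias, lo, hi = bland_altman(test, ref)
up, down = misclassification(test, ref)
```

With these made-up pairs, one count (355 vs. 340) sits above the 350 cells/μl threshold on the candidate assay but below it on the reference, illustrating the upward misclassification the review quantifies.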

Relevance: 60.00%

Abstract:

For more than 30 years, the parasite Varroa destructor has caused the loss of many colonies around the world. The use of synthetic acaricides has proven ineffective in Canada and elsewhere following the selection of resistant mites. In this context, it has become imperative to find new means of controlling this apicultural pest. The aim of this original research was to determine the fundamental parameters of an integrated pest management programme against varroosis, based on the periodic use of different organic pesticides (oxalic acid, formic acid, and thymol) combined with intervention thresholds. The intervention thresholds were determined using linear regressions between V. destructor parasitism rates and the zootechnical performance of honey bee colonies (honey production and colony strength). A total of 154 bee colonies at the Centre de recherche en sciences animales de Deschambault (CRSAD) were monitored from September 2005 to September 2006. The thresholds calculated and proposed from this research are 2 mites per day (natural fall) in early May, 10 mites per day in late July, and 9 mites per day in early September. The efficacy of the organic treatments with oxalic acid (OA), formic acid (FA), and thymol was tested in May (before the first honey flow), in July (between two honey flows), in September (after the honey flow, during colony feeding), and in November (before wintering). Oxalic acid was applied by the trickling method (4% OA w/v in 1:1 w/v sucrose syrup). 
Formic acid was applied as MiteAwayII™ (a commercial pad soaked with 65% v/v FA placed on top of the brood frames), Mitewipe (Dri-Loc™ 10/15 cm pads soaked with 35 mL of 65% v/v FA placed on top of the brood frames), or Flash (65% FA poured directly onto the bottom board of a colony, 2 mL per frame of bees). Thymol was applied as Apiguard™ (a gel containing 25% thymol w/v placed on top of the brood frames). The efficacy trials were carried out from 2006 to 2008 on a total of 170 colonies (98 belonging to CRSAD and 72 privately owned). The results show that the spring treatments tested have low efficacy for controlling varroa mites, whose populations are growing rapidly during that period. A mid-summer FA treatment reduces parasite levels below the September threshold, but carries a risk of contaminating the honey harvest with FA residues. Treatments in September with MiteAwayII™ followed by an oxalic acid treatment in November (5 mL trickled between each frame of bees, 4% OA w/v in 1:1 w/v sucrose syrup) are the most effective: they reduce varroosis levels below the threshold of 2 mites per day in spring. Our results also show that treatments applied early in September are more effective, and produce stronger colonies in spring, than a treatment applied one month later in October. In conclusion, this research demonstrates that it is possible to contain the development of varroosis in Quebec apiaries using an integrated pest management approach based on a combination of organic acaricide applications tied to intervention thresholds.
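The regression-based threshold derivation described in this abstract can be sketched as follows: fit colony performance (honey yield) against mite fall, then solve for the mite level at which the predicted yield loss reaches an acceptable limit. The data points, slope, and acceptable-loss figure below are hypothetical, not the CRSAD measurements.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares intercept and slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

# Hypothetical: natural mite fall (varroa/day) vs. honey yield (kg/colony)
mite_fall = [0, 2, 4, 6, 8, 10]
honey_kg = [40, 38, 36, 34, 32, 30]

intercept, slope = linear_fit(mite_fall, honey_kg)  # slope: kg lost per mite/day
acceptable_loss_kg = 10                             # tolerated yield loss (assumed)
threshold = acceptable_loss_kg / -slope             # intervention threshold (mites/day)
```

Under these made-up numbers each additional mite per day costs 1 kg of honey, so tolerating a 10 kg loss puts the intervention threshold at 10 mites per day.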

Relevance: 60.00%

Abstract:

Background Although CD4 cell count monitoring is used to decide when to start antiretroviral therapy in patients with HIV-1 infection, there are no evidence-based recommendations regarding its optimal frequency. It is common practice to monitor every 3 to 6 months, often coupled with viral load monitoring. We developed rules to guide the frequency of CD4 cell count monitoring in HIV infection before starting antiretroviral therapy, which we validated retrospectively in patients from the Swiss HIV Cohort Study. Methodology/Principal Findings We built two prediction rules ("Snap-shot rule" for a single sample and "Track-shot rule" for multiple determinations) based on a systematic review of published longitudinal analyses of CD4 cell count trajectories. We applied the rules in 2608 untreated patients to classify their 18,061 CD4 counts as either justifiable or superfluous, according to their prior ≥5% or <5% chance of meeting predetermined thresholds for starting treatment. The percentage of measurements that both rules falsely deemed superfluous never exceeded 5%. Superfluous CD4 determinations represented 4%, 11%, and 39% of all actual determinations for treatment thresholds of 500, 350, and 200×10⁶/L, respectively. The Track-shot rule was only marginally superior to the Snap-shot rule. Both rules lose usefulness as CD4 counts approach the treatment threshold. Conclusions/Significance Frequent CD4 count monitoring of patients with CD4 counts well above the threshold for initiating therapy is unlikely to identify patients who require therapy. It appears sufficient to measure the CD4 cell count 1 year after a count >650 for a threshold of 200, >900 for 350, or >1150 for 500×10⁶/L, respectively. When CD4 counts fall below these limits, increased monitoring frequency becomes advisable. These rules offer guidance for efficient CD4 monitoring, particularly in resource-limited settings.
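The Snap-shot guidance quoted in the conclusions — a repeat count within one year is unlikely to cross the treatment threshold when the current count exceeds a published limit — can be expressed as a simple lookup. The limits are the ones stated in the abstract; the function name and interface are illustrative.

```python
# Snap-shot limits from the abstract: treatment threshold -> CD4 count
# (x10^6/L) above which a repeat measurement within 1 year is superfluous.
SNAPSHOT_LIMITS = {200: 650, 350: 900, 500: 1150}

def repeat_superfluous(cd4_count, treatment_threshold):
    """True when a repeat CD4 count within the next year has a <5% prior
    chance of falling below the treatment threshold (per the abstract)."""
    return cd4_count > SNAPSHOT_LIMITS[treatment_threshold]
```

For example, a patient with a count of 700 against a treatment threshold of 200 would not need re-measuring within the year, whereas a count of 1000 against a threshold of 500 would.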

Relevance: 30.00%

Abstract:

- Background Nilotinib and dasatinib are now being considered as alternative treatments to imatinib as a first-line treatment of chronic myeloid leukaemia (CML). - Objective This technology assessment reviews the available evidence for the clinical effectiveness and cost-effectiveness of dasatinib, nilotinib and standard-dose imatinib for the first-line treatment of Philadelphia chromosome-positive CML. - Data sources Databases [including MEDLINE (Ovid), EMBASE, Current Controlled Trials, ClinicalTrials.gov, the US Food and Drug Administration website and the European Medicines Agency website] were searched from the search end date of the previous technology appraisal report on this topic (October 2002) to September 2011. - Review methods A systematic review of clinical effectiveness and cost-effectiveness studies; a review of surrogate relationships with survival; a review and critique of manufacturer submissions; and a model-based economic analysis. - Results Two clinical trials (dasatinib vs imatinib and nilotinib vs imatinib) were included in the effectiveness review. Survival was not significantly different for dasatinib or nilotinib compared with imatinib with the 24-month follow-up data available. The rates of complete cytogenetic response (CCyR) and major molecular response (MMR) were higher for patients receiving dasatinib than for those receiving imatinib at 12 months' follow-up (CCyR 83% vs 72%, p < 0.001; MMR 46% vs 28%, p < 0.0001). The rates of CCyR and MMR were higher for patients receiving nilotinib than for those receiving imatinib at 12 months' follow-up (CCyR 80% vs 65%, p < 0.001; MMR 44% vs 22%, p < 0.0001). An indirect comparison analysis showed no difference between dasatinib and nilotinib for CCyR or MMR rates at 12 months' follow-up (CCyR, odds ratio 1.09, 95% CI 0.61 to 1.92; MMR, odds ratio 1.28, 95% CI 0.77 to 2.16). 
There is observational association evidence from imatinib studies supporting the use of CCyR and MMR at 12 months as surrogates for overall all-cause survival and progression-free survival in patients with CML in chronic phase. In the cost-effectiveness modelling, scenario analyses were provided to reflect the extensive structural uncertainty and different approaches to estimating overall survival (OS). First-line dasatinib is predicted to provide very poor value for money compared with first-line imatinib, with deterministic incremental cost-effectiveness ratios (ICERs) of between £256,000 and £450,000 per quality-adjusted life-year (QALY). Conversely, first-line nilotinib provided favourable ICERs at the willingness-to-pay threshold of £20,000-30,000 per QALY. - Limitations Immaturity of empirical trial data relative to life expectancy, forcing either reliance on surrogate relationships or cumulative survival/treatment duration assumptions. - Conclusions From the two trials available, dasatinib and nilotinib have a statistically significant advantage over imatinib as measured by MMR or CCyR. Taking into account the treatment pathways for patients with CML, i.e. assuming the use of second-line nilotinib, first-line nilotinib appears to be more cost-effective than first-line imatinib. Dasatinib was not cost-effective compared with imatinib and nilotinib if decision thresholds of £20,000 or £30,000 per QALY were used. Uncertainty in the cost-effectiveness analysis would be substantially reduced with better and more UK-specific data on the incidence and cost of stem cell transplantation in patients with chronic CML. - Funding The Health Technology Assessment Programme of the National Institute for Health Research.
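The deterministic ICERs quoted in this assessment are incremental cost divided by incremental QALYs. A minimal sketch of that calculation, using hypothetical cost and QALY figures rather than the assessment's model outputs:

```python
def icer(cost_new, qalys_new, cost_ref, qalys_ref):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained
    relative to the reference (comparator) treatment."""
    return (cost_new - cost_ref) / (qalys_new - qalys_ref)

# Hypothetical: a new first-line therapy costing 80,000 GBP more than
# the comparator and yielding 0.25 additional QALYs.
ratio = icer(200_000, 10.25, 120_000, 10.00)
cost_effective = ratio <= 30_000   # vs. the 20,000-30,000 GBP/QALY threshold
```

With these made-up inputs the ICER is £320,000 per QALY, well above the willingness-to-pay range the report applies, which is how a therapy with superior response rates can still be judged poor value for money.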

Relevance: 30.00%

Abstract:

Antireflection coatings with a centre wavelength of 1053 nm were prepared on BK7 glasses by electron-beam evaporation deposition (EBD) and ion-beam-assisted deposition (IBAD). Some samples of each kind were post-treated with oxygen plasma at ambient temperature after deposition. Absorption at 1064 nm was characterized using the surface thermal lensing (STL) technique. The laser-induced damage threshold (LIDT) was measured with a 1064-nm Nd:YAG laser with a pulse width of 38 ps. A Leica DMRXE microscope was used to examine the damage morphologies of the samples. The results revealed that oxygen post-treatment lowered the absorption and increased the damage thresholds for both kinds of as-grown samples, although the magnitude of the improvement differed between the two deposition methods.

Relevance: 30.00%

Abstract:

Osteoarthritis (OA) is a degenerative joint disease that can result in joint pain, loss of joint function, and deleterious effects on activity levels and lifestyle habits. Current therapies for OA are largely aimed at symptomatic relief and may have limited effects on the underlying cascade of joint degradation. Local drug delivery strategies may provide for the development of more successful OA treatment outcomes that have potential to reduce local joint inflammation, reduce joint destruction, offer pain relief, and restore patient activity levels and joint function. As increasing interest turns toward intra-articular drug delivery routes, parallel interest has emerged in evaluating drug biodistribution, safety, and efficacy in preclinical models. Rodent models provide major advantages for the development of drug delivery strategies, chiefly because of lower cost, successful replication of human OA-like characteristics, rapid disease development, and small joint volumes that enable use of lower total drug amounts during protocol development. These models, however, also offer the potential to investigate the therapeutic effects of local drug therapy on animal behavior, including pain sensitivity thresholds and locomotion characteristics. Herein, we describe a translational paradigm for the evaluation of an intra-articular drug delivery strategy in a rat OA model. This model, a rat interleukin-1beta overexpression model, offers the ability to evaluate anti-interleukin-1 therapeutics for drug biodistribution, activity, and safety as well as the therapeutic relief of disease symptoms. Once the action against interleukin-1 is confirmed in vivo, the newly developed anti-inflammatory drug can be evaluated for evidence of disease-modifying effects in more complex preclinical models.

Relevance: 30.00%

Abstract:

We present and experimentally test a theoretical model of majority threshold determination as a function of voters’ risk preferences. The experimental results confirm the theoretical prediction of a positive correlation between the voter's risk aversion and the corresponding preferred majority threshold. Furthermore, the experimental results show that a voter's preferred majority threshold negatively relates to the voter's confidence about how others will vote. Moreover, in a treatment in which individuals receive a private signal about others’ voting behaviour, the confidence-related motivation of behaviour loses ground to the signal's strength.

Relevance: 30.00%

Abstract:

Background Cognitive–behavioural therapy (CBT) for childhood anxiety disorders is associated with modest outcomes in the context of parental anxiety disorder. Objectives This study evaluated whether or not the outcome of CBT for children with anxiety disorders in the context of maternal anxiety disorders is improved by the addition of (i) treatment of maternal anxiety disorders, or (ii) treatment focused on maternal responses. The incremental cost-effectiveness of the additional treatments was also evaluated. Design Participants were randomised to receive (i) child cognitive–behavioural therapy (CCBT); (ii) CCBT with CBT to target maternal anxiety disorders [CCBT + maternal cognitive–behavioural therapy (MCBT)]; or (iii) CCBT with an intervention to target mother–child interactions (MCI) (CCBT + MCI). Setting An NHS university clinic in Berkshire, UK. Participants Two hundred and eleven children with a primary anxiety disorder, whose mothers also had an anxiety disorder. Interventions All families received eight sessions of individual CCBT. Mothers in the CCBT + MCBT arm also received eight sessions of CBT targeting their own anxiety disorders. Mothers in the MCI arm received 10 sessions targeting maternal parenting cognitions and behaviours. Non-specific interventions were delivered to balance groups for therapist contact. Main outcome measures Primary clinical outcomes were the child’s primary anxiety disorder status and degree of improvement at the end of treatment. Follow-up assessments were conducted at 6 and 12 months. Outcomes in the economic analyses were identified and measured using estimated quality-adjusted life-years (QALYs). QALYs were combined with treatment, health and social care costs and presented within an incremental cost–utility analysis framework with associated uncertainty. 
Results MCBT was associated with significant short-term improvement in maternal anxiety; however, after children had received CCBT, group differences were no longer apparent. CCBT + MCI was associated with a reduction in maternal overinvolvement and more confident expectations of the child. However, neither CCBT + MCBT nor CCBT + MCI conferred a significant post-treatment benefit over CCBT in terms of child anxiety disorder diagnoses [CCBT + MCBT vs. CCBT: adjusted risk ratio (RR) 1.18, 95% confidence interval (CI) 0.87 to 1.62, p = 0.29; CCBT + MCI vs. CCBT: adjusted RR 1.22, 95% CI 0.90 to 1.67, p = 0.20] or global improvement ratings (adjusted RR 1.25, 95% CI 1.00 to 1.59, p = 0.05; adjusted RR 1.20, 95% CI 0.95 to 1.53, p = 0.13, respectively). CCBT + MCI outperformed CCBT on some secondary outcome measures. Furthermore, primary economic analyses suggested that, at commonly accepted thresholds of cost-effectiveness, the probability that CCBT + MCI will be cost-effective in comparison with CCBT (plus non-specific interventions) is about 75%. Conclusions Good outcomes were achieved for children and their mothers across treatment conditions. There was no evidence of a benefit to child outcome of supplementing CCBT with either an intervention focusing on maternal anxiety disorder or one focusing on maternal cognitions and behaviours. However, supplementing CCBT with treatment that targeted maternal cognitions and behaviours represented a cost-effective use of resources, although the high percentage of missing data on some economic variables is a shortcoming. Future work should consider whether or not effects of the adjunct interventions are enhanced in particular contexts. The economic findings highlight the utility of considering the use of a broad range of services when evaluating interventions with this client group. Trial registration Current Controlled Trials ISRCTN19762288. 
Funding This trial was funded by the Medical Research Council (MRC) and Berkshire Healthcare Foundation Trust and managed by the National Institute for Health Research (NIHR) on behalf of the MRC–NIHR partnership (09/800/17) and will be published in full in Health Technology Assessment; Vol. 19, No. 38.

Relevance: 30.00%

Abstract:

This paper presents a summary of the evidence review group (ERG) report into the clinical effectiveness and cost-effectiveness of ustekinumab for the treatment of moderate to severe psoriasis based upon a review of the manufacturer's submission to the National Institute for Health and Clinical Excellence (NICE) as part of the single technology appraisal (STA) process. The submission's main evidence came from three randomised controlled trials (RCTs), of reasonable methodological quality and measuring a range of clinically relevant outcomes. Higher proportions of participants treated with ustekinumab (45 mg and 90 mg) than with placebo or etanercept achieved an improvement on the Psoriasis Area and Severity Index (PASI) of at least 75% (PASI 75) after 12 weeks. There were also statistically significant differences in favour of ustekinumab over placebo for PASI 50 and PASI 90 results, and for ustekinumab over etanercept for PASI 90 results. A weight-based subgroup dosing analysis for each trial was presented, but the methodology was poorly described and no statistical analysis to support the chosen weight threshold was presented. The manufacturer carried out a mixed treatment comparison (MTC); however, the appropriateness of some of the methodological aspects of the MTC is uncertain. The incidence of adverse events was similar between groups at 12 weeks and withdrawals due to adverse events were low and less frequent in the ustekinumab than in the placebo or etanercept groups; however, statistical comparisons were not reported. The manufacturer's economic model of treatments for psoriasis compared ustekinumab with other biological therapies. The model used a reasonable approach; however, it is not clear whether the clinical effectiveness estimates from the subgroup analysis, used in the base-case analysis, were methodologically appropriate. 
The base-case incremental cost-effectiveness ratio for ustekinumab versus supportive care was £29,587 per quality-adjusted life-year (QALY). In one-way sensitivity analysis the model was most sensitive to the number of hospital days associated with supportive care, the cost estimate for intermittent etanercept 25 mg and the utility scores used. In the ERG's scenario analysis the model was most sensitive to the price of ustekinumab 90 mg, the proportion of patients with baseline weight > 100 kg and the relative risk of intermittent versus continuous etanercept 25 mg. In the ERG's probabilistic sensitivity analysis ustekinumab had the highest probability of being cost-effective at conventional NICE thresholds, assuming the same price for the 45-mg and 90-mg doses; however, doubling the price of ustekinumab 90 mg resulted in ustekinumab no longer dominating the comparators. In conclusion, the clinical effectiveness and cost-effectiveness of ustekinumab in relation to other drugs in this class is uncertain. Provisional NICE guidance issued as a result of the STA states that ustekinumab is recommended as a treatment option for adults with plaque psoriasis when a number of criteria are met. Final guidance is anticipated in September 2009.

Relevance: 30.00%

Abstract:

To evaluate a prototype pressure stimulus device for use in the cat and to compare it with a known thermal threshold device. Eight healthy adult cats weighing between 3.0 and 4.9 kg. Pressure stimulation was given via a plastic bracelet taped around the forearm. Three 2.4-mm-diameter ball bearings, in a 10-mm triangle, were advanced against the craniolateral surface of the antebrachium by manual inflation of a modified blood pressure bladder. Pressure in the cuff was recorded at the end point (leg shake and head turn). Thermal threshold was also tested. Stimuli were stopped if they reached 55 °C or 450 mmHg without response. After four pressure and thermal threshold baselines, each cat received SC buprenorphine 0.01 mg kg⁻¹, carprofen 4 mg kg⁻¹ or saline 0.3 mL in a three-period cross-over study with a 1-week interval. The investigator was blinded to the treatment. Measurements were made at 0.25, 0.5, 0.75, 1, 2, 3, 4, 6, 8, and 24 hours after injection. Data were analyzed using ANOVA. There were no significant changes in thermal or pressure threshold after administration of saline or carprofen, but thermal threshold increased from 60 minutes until 8 hours after administration of buprenorphine (p < 0.05). The maximum increase in threshold from baseline (ΔTmax) was 3.5 ± 3.1 °C at 2 hours. Pressure threshold increased 2 hours after administration of buprenorphine (p < 0.05), when the increase in threshold above baseline (ΔPmax) was 162 ± 189 mmHg. This pressure device resulted in thresholds that were affected by analgesic treatment in a similar manner, but to a lesser degree, than the thermal method. Pressure stimulation may be a useful additional method for analgesic studies in cats.

Relevance: 30.00%

Abstract:

Objective To measure cutaneous electrical nociceptive thresholds in relation to known thermal and mechanical stimulation for nociceptive threshold detection in cats. Study design Prospective, blinded, randomized cross-over study with a 1-week washout interval. Animals Eight adult cats [bodyweight 5.1 ± 1.8 kg (mean ± SD)]. Methods Mechanical nociceptive thresholds were tested using step-wise manual inflation of a modified blood pressure bladder attached to the cat's thoracic limb. Thermal nociceptive thresholds were measured by increasing the temperature of a probe placed on the thorax. The electrical nociceptive threshold was tested using an escalating current from a constant-current generator passed between electrodes placed on the thoracic region. A positive response (threshold) was recorded when cats displayed any or all of the following behaviors: leg shake, head turn, avoidance, or vocalization. Four baseline readings were performed before intramuscular injection of meperidine (5 mg kg⁻¹) or an equal volume of saline. Threshold recordings with each modality were made at 15, 30, 45, 60, 90, and 120 minutes post-injection. Data were analyzed using ANOVA and paired t-tests (significance at p < 0.05). Results There were no significant changes in thermal, mechanical, or electrical thresholds after saline. Thermal thresholds increased at 15-60 minutes (p < 0.01) and mechanical thresholds increased at 30 and 45 minutes after meperidine (p < 0.05). The maximum thermal threshold was 4.1 ± 0.3 °C above baseline at 15 minutes, while the maximum mechanical threshold was 296 ± 265 mmHg above baseline at 30 minutes after meperidine. Electrical thresholds following meperidine were not significantly different from baseline (p > 0.05). Thermal and electrical thresholds after meperidine were significantly higher than after saline at 30 and 45 minutes (p < 0.05), and at 120 minutes (p < 0.05), respectively. 
Mechanical thresholds were significantly higher than after saline treatment at 30 minutes (p ≤ 0.05). Conclusion and clinical relevance Electrical stimulation did not detect meperidine analgesia, whereas both thermal and mechanical thresholds changed after meperidine administration in cats.

Relevance: 30.00%

Abstract:

This work aims to determine the threshold for burning in the surface grinding process. Acoustic emission and electric power signals are acquired through an analog-to-digital converter and processed by algorithms in order to generate a control signal that informs the operator or interrupts the process if burning occurs. The thresholds that separate burn from non-burn conditions were studied, and the two parameters were compared. In the experimental work, one type of steel (ABNT 1045, annealed) and one type of grinding wheel (TARGA model 3TG80.3-NV) were employed.
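The monitoring scheme this abstract describes — threshold both signals and raise a control flag on a burn condition — might look like the following sketch. The signal values and threshold levels are illustrative, not the paper's experimental settings.

```python
def burn_flag(ae_rms, power_w, ae_limit, power_limit):
    """True (warn the operator / interrupt the process) when either the
    acoustic-emission RMS or the electric power exceeds its burn threshold."""
    return ae_rms > ae_limit or power_w > power_limit

# Illustrative per-pass samples of (AE RMS [V], spindle power [W])
samples = [(0.8, 900), (1.1, 950), (1.6, 1300)]
flags = [burn_flag(ae, p, ae_limit=1.5, power_limit=1200) for ae, p in samples]
```

Only the third pass, where both monitored signals exceed their (assumed) limits, would trigger the control signal.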

Relevance: 30.00%

Abstract:

Context Long-term antiretroviral therapy (ART) use in resource-limited countries leads to increasing numbers of patients with HIV taking second-line therapy. Limited access to further therapeutic options makes essential the evaluation of second-line regimen efficacy in these settings. Objectives To investigate failure rates in patients receiving second-line therapy and factors associated with failure and death. Design, Setting, and Participants Multicohort study of 632 patients >14 years old receiving second-line therapy for more than 6 months in 27 ART programs in Africa and Asia between January 2001 and October 2008. Main Outcome Measures Clinical, immunological, virological, and immunovirological failure (first diagnosed episode of immunological or virological failure) rates, and mortality after 6 months of second-line therapy use. Sensitivity analyses were performed using alternative CD4 cell count thresholds for immunological and immunovirological definitions of failure and for cohort attrition instead of death. Results The 632 patients provided 740.7 person-years of follow-up; 119 (18.8%) met World Health Organization failure criteria after a median 11.9 months following the start of second-line therapy (interquartile range [IQR], 8.7-17.0 months), and 34 (5.4%) died after a median 15.1 months (IQR, 11.9-25.7 months). Failure rates were lower in those who changed 2 nucleoside reverse transcriptase inhibitors (NRTIs) instead of 1 (179.2 vs 251.6 per 1000 person-years; incidence rate ratio [IRR], 0.64; 95% confidence interval [CI], 0.42-0.96), and higher in those with lowest adherence index (383.5 vs 176.0 per 1000 person-years; IRR, 3.14; 95% CI, 1.67-5.90 for <80% vs ≥95% [percentage adherent, as represented by percentage of appointments attended with no delay]). 
Failure rates increased with lower CD4 cell counts when second-line therapy was started, from 156.3 vs 96.2 per 1000 person-years; IRR, 1.59 (95% CI, 0.78-3.25) for 100 to 199/μL to 336.8 per 1000 person-years; IRR, 3.32 (95% CI, 1.81-6.08) for less than 50/μL vs 200/μL or higher; and decreased with time using second-line therapy, from 250.0 vs 123.2 per 1000 person-years; IRR, 1.90 (95% CI, 1.19-3.02) for 6 to 11 months to 212.0 per 1000 person-years; 1.71 (95% CI, 1.01-2.88) for 12 to 17 months vs 18 or more months. Mortality for those taking second-line therapy was lower in women (32.4 vs 68.3 per 1000 person-years; hazard ratio [HR], 0.45; 95% CI, 0.23-0.91); and higher in patients with treatment failure of any type (91.9 vs 28.1 per 1000 person-years; HR, 2.83; 95% CI, 1.38-5.80). Sensitivity analyses showed similar results. Conclusions Among patients in Africa and Asia receiving second-line therapy for HIV, treatment failure was associated with low CD4 cell counts at second-line therapy start, use of suboptimal second-line regimens, and poor adherence. Mortality was associated with diagnosed treatment failure.
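The failure statistics reported in this abstract are incidence rates per 1000 person-years and their ratios (IRRs). A minimal sketch of both quantities, on illustrative counts rather than the cohort's data:

```python
def rate_per_1000_py(events, person_years):
    """Incidence rate per 1000 person-years of follow-up."""
    return 1000 * events / person_years

def incidence_rate_ratio(events_a, py_a, events_b, py_b):
    """IRR: incidence rate in group A divided by the rate in group B."""
    return rate_per_1000_py(events_a, py_a) / rate_per_1000_py(events_b, py_b)

# Illustrative: 45 failures over 180 person-years (low-adherence group)
# vs. 50 failures over 400 person-years (high-adherence group).
irr = incidence_rate_ratio(45, 180, 50, 400)   # 250 vs. 125 per 1000 PY
```

An IRR of 2.0 here would mean failures accrue twice as fast per unit of follow-up time in the first group, the same scale on which the abstract reports adherence and CD4-count effects.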

Relevance: 30.00%

Abstract:

BACKGROUND The use of combination antiretroviral therapy (cART) comprising three antiretroviral medications from at least two classes of drugs is the current standard treatment for HIV infection in adults and children. Current World Health Organization (WHO) guidelines for antiretroviral therapy recommend early treatment, regardless of immunologic thresholds or clinical condition, for all infants (less than one year of age) and children under the age of two years. For children aged two to five years, current WHO guidelines recommend (based on low-quality evidence) that clinical and immunological thresholds be used to identify those who need to start cART (advanced clinical stage, or CD4 counts ≤ 750 cells/mm³, or per cent CD4 ≤ 25%). This Cochrane review summarises the currently available evidence regarding the optimal time for treatment initiation in children aged two to five years, with the goal of informing the revision of the WHO 2013 recommendations on when to initiate cART in children. OBJECTIVES To assess the evidence for the optimal time to initiate cART in treatment-naive, HIV-infected children aged 2 to 5 years. SEARCH METHODS We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, the AEGIS conference database, specific relevant conferences, www.clinicaltrials.gov, the World Health Organization International Clinical Trials Registry platform and reference lists of articles. The date of the most recent search was 30 September 2012. SELECTION CRITERIA Randomised controlled trials (RCTs) that compared immediate with deferred initiation of cART, and prospective cohort studies which followed children from enrolment to the start of cART and on cART. 
DATA COLLECTION AND ANALYSIS Two review authors considered studies for inclusion in the review, assessed the risk of bias, and extracted data on the primary outcome of death from all causes and several secondary outcomes, including incidence of CDC category C and B clinical events and per cent CD4 cells (CD4%) at study end. For RCTs we calculated relative risks (RR) or mean differences with 95% confidence intervals (95% CI). For cohort data, we extracted relative risks with 95% CI from adjusted analyses. We combined results from RCTs using a random effects model and examined statistical heterogeneity. MAIN RESULTS Two RCTs in HIV-positive children aged 1 to 12 years were identified. One trial was the pilot study for the larger second trial and both compared initiation of cART regardless of clinical-immunological conditions with deferred initiation until per cent CD4 dropped to <15%. The two trials were conducted in Thailand, and Thailand and Cambodia, respectively. Unpublished analyses of the 122 children enrolled at ages 2 to 5 years were included in this review. There was one death in the immediate cART group and no deaths in the deferred group (RR 2.9; 95% CI 0.12 to 68.9). In the subgroup analysis of children aged 24 to 59 months, there was one CDC C event in each group (RR 0.96; 95% CI 0.06 to 14.87) and 8 and 11 CDC B events in the immediate and deferred groups respectively (RR 0.95; 95% CI 0.24 to 3.73). In this subgroup, the mean difference in CD4 per cent at study end was 5.9% (95% CI 2.7 to 9.1). One cohort study from South Africa, which compared the effect of delaying cART for up to 60 days in 573 HIV-positive children starting tuberculosis treatment (median age 3.5 years), was also included. The adjusted hazard ratios for the effect on mortality of delaying ART for more than 60 days was 1.32 (95% CI 0.55 to 3.16). 
AUTHORS' CONCLUSIONS This systematic review shows that there is insufficient evidence from clinical trials in support of either early or CD4-guided initiation of ART in HIV-infected children aged 2 to 5 years. Programmatic issues such as the retention in care of children in ART programmes in resource-limited settings will need to be considered when formulating WHO 2013 recommendations.
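The relative risks with 95% confidence intervals that this review calculated from RCT data can be computed from 2×2 counts using the usual log-normal approximation for the risk ratio. The event counts below are illustrative, not the trials' data.

```python
from math import exp, sqrt

def relative_risk_ci(events_1, n_1, events_2, n_2):
    """Relative risk of group 1 (events_1/n_1) vs. group 2 (events_2/n_2)
    with an approximate 95% CI (log-normal approximation)."""
    rr = (events_1 / n_1) / (events_2 / n_2)
    se_log_rr = sqrt(1/events_1 - 1/n_1 + 1/events_2 - 1/n_2)
    return rr, rr * exp(-1.96 * se_log_rr), rr * exp(1.96 * se_log_rr)

# Illustrative 2x2: 8/60 events in an immediate arm vs. 11/62 in a deferred arm
rr, lo, hi = relative_risk_ci(8, 60, 11, 62)
```

When, as here, the interval spans 1, the point estimate favours one arm but the data are compatible with no difference, which is the pattern behind this review's "insufficient evidence" conclusion.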