3 results for sequencing error
Abstract:
OBJECTIVE: To explore the potential of deep HIV-1 sequencing to add clinically relevant information over viral population sequencing in heavily pre-treated HIV-1-infected subjects.
METHODS: In a proof-of-concept study, deep sequencing was compared with population sequencing in HIV-1-infected individuals with previous triple-class virological failure who also developed virological failure on deep salvage therapy including at least one of darunavir, tipranavir, etravirine or raltegravir. Viral susceptibility was inferred before salvage therapy initiation and at virological failure using deep- and population-sequencing genotypes interpreted with the HIVdb, Rega and ANRS algorithms. The threshold for mutant detection with deep sequencing was 1%.
RESULTS: Seven subjects with previous exposure to a median of 15 antiretrovirals over a median of 13 years were included. Deep salvage therapy included darunavir, tipranavir, etravirine or raltegravir in 4, 2, 2 and 5 subjects, respectively. Self-reported treatment adherence was adequate in 4 subjects and partial in 2; one individual interrupted treatment during follow-up. Deep sequencing detected all mutations found by population sequencing and identified additional resistance mutations in all but one individual, predominantly after virological failure of deep salvage therapy. The additional genotypic information led to consistent decreases in predicted susceptibility to etravirine, efavirenz, nucleoside reverse transcriptase inhibitors and indinavir in 2, 1, 2 and 1 subject, respectively. Deep sequencing data did not consistently modify the susceptibility predictions obtained with population sequencing for darunavir, tipranavir or raltegravir.
CONCLUSIONS: In this subset of heavily pre-treated individuals, deep sequencing improved the assessment of genotypic resistance to etravirine but did not consistently provide additional information on darunavir, tipranavir or raltegravir susceptibility. These data may inform the design of future studies addressing the clinical value of minority drug-resistant variants in treatment-experienced subjects.
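A minimal Python sketch of the 1% minority-variant threshold mentioned in the abstract above: given hypothetical deep-sequencing read counts per codon, it reports resistance-associated substitutions whose frequency reaches the cutoff. The positions, counts and mutation list are illustrative only; the study's actual variant-calling pipeline is not described in the abstract.

```python
# Minority-variant calling at a 1% frequency threshold (the study's cutoff).
# All read counts, positions and mutation lists below are hypothetical.

DETECTION_THRESHOLD = 0.01  # 1% cutoff for mutant detection

# Hypothetical read counts per reverse-transcriptase codon:
# {codon_position: {amino_acid: read_count}}
read_counts = {
    103: {"K": 9420, "N": 540, "R": 40},  # K103N at ~5.4% of reads
    181: {"Y": 9900, "C": 95, "I": 5},    # Y181C at ~0.95%, below cutoff
}

reference = {103: "K", 181: "Y"}                 # wild-type amino acids
resistance_mutations = {(103, "N"), (181, "C")}  # substitutions of interest

def call_minority_variants(counts, ref, muts, threshold):
    """Return (position, amino_acid, frequency) for resistance mutations
    whose within-sample frequency meets the detection threshold."""
    calls = []
    for pos, aa_counts in counts.items():
        depth = sum(aa_counts.values())
        for aa, n in aa_counts.items():
            freq = n / depth
            if aa != ref[pos] and (pos, aa) in muts and freq >= threshold:
                calls.append((pos, aa, freq))
    return calls

for pos, aa, freq in call_minority_variants(
        read_counts, reference, resistance_mutations, DETECTION_THRESHOLD):
    print(f"{reference[pos]}{pos}{aa}: {freq:.1%}")  # prints: K103N: 5.4%
```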
Abstract:
BACKGROUND: Missed, delayed or incorrect diagnoses are considered diagnostic errors. The aim of this paper is to describe the methodology of a study analysing the cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason's taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed.
METHODS: Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). Assuming an expected diagnostic error rate of 20% and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to completing the patient's electronic medical record, each physician fills out two specially designed questionnaires about the diagnostic process in each case of dyspnoea. The first questionnaire covers the physician's initial diagnostic impression, the three most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination; it also includes items on the physician's perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify diagnostic errors. The possible use of the representativeness, availability, and anchoring-and-adjustment heuristics in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine whether the diagnostic error variables differ according to the heuristics identified.
DISCUSSION: This work sets out a new approach to studying the diagnostic decision-making process in PC, taking advantage of new technologies that allow immediate recording of the decision-making process.
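The logistic regression step at the end of METHODS can be sketched briefly. The following Python example, using statsmodels, regresses a binary diagnostic-error outcome on indicators of heuristic use; the data are simulated and the covariate set is an assumption, since the abstract does not state which variables enter the models.

```python
# Hedged sketch of the planned analysis: logistic regression of diagnostic
# error on indicators of heuristic use. All data below are simulated and
# the covariate set is hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 384  # the planned number of dyspnoea episodes

# Hypothetical binary indicators: whether each heuristic was identified
representativeness = rng.integers(0, 2, n)
availability = rng.integers(0, 2, n)
anchoring = rng.integers(0, 2, n)  # anchoring and adjustment

# Hypothetical outcome: diagnostic error (1) vs correct diagnosis (0)
true_logit = -1.4 + 0.8 * representativeness + 0.5 * availability + 0.3 * anchoring
error = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

X = sm.add_constant(np.column_stack([representativeness, availability, anchoring]))
fit = sm.Logit(error, X).fit(disp=False)
print(fit.summary(xname=["const", "representativeness", "availability", "anchoring"]))
```

Exponentiating the fitted coefficients from such a model gives the odds of diagnostic error associated with each heuristic.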
Abstract:
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. Regression calibration is commonly used to adjust for this attenuation, and it requires unbiased reference measurements. Short-term reference measurements for foods that are not consumed daily contain excess zeros, which pose challenges for the calibration model. We adapted the two-part regression calibration model, originally developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zeros in the reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and empirical logit approaches, and how to select covariates for the calibration model. The performance of the two-part calibration model was compared with that of its one-part counterpart, using vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in an approximately threefold increase in the strength of the association with all-cause mortality, as measured by the log hazard ratio. We also found that the standard way of including covariates in the calibration model can lead to overfitting of the two-part model, and that the extent of error adjustment is influenced by the number and form of the covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
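The two-part calibration model described above lends itself to a compact sketch. The Python example below, using statsmodels, fits part 1 (a logistic model for the probability of any consumption on the recall day) and part 2 (a linear model for the consumed amount among consumers), then multiplies the two predictions to obtain a calibrated intake. The data are simulated and the model forms are assumptions; the paper's heteroscedasticity and nonlinearity diagnostics (variance-mean graph, GAM, empirical logit) are not reproduced here.

```python
# Minimal two-part regression calibration sketch for an episodically
# consumed food. All data are simulated; variable names are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000

# Hypothetical error-prone self-report (e.g. FFQ) and one covariate
ffq = rng.gamma(2.0, 30.0, n)
age = rng.normal(55.0, 10.0, n)

# Hypothetical single-replicate 24-hour-recall reference with excess zeros
p_consume = 1.0 / (1.0 + np.exp(-(-1.0 + 0.01 * ffq)))      # chance of any intake
consumed = rng.binomial(1, p_consume)                        # any intake recalled?
amount = consumed * rng.gamma(2.0, (5.0 + 0.3 * ffq) / 2.0)  # amount if consumed

X = sm.add_constant(np.column_stack([ffq, age]))

# Part 1: logistic model for the probability of any consumption
part1 = sm.Logit(consumed, X).fit(disp=False)
p_hat = part1.predict(X)

# Part 2: linear model for the consumed amount, fit among consumers only
eaters = consumed == 1
part2 = sm.OLS(amount[eaters], X[eaters]).fit()
amount_hat = part2.predict(X)

# Calibrated intake = P(consume) * E[amount | consume]; this value would
# replace the error-prone self-report in the disease (e.g. Cox) model.
calibrated = p_hat * amount_hat
print(calibrated[:5])
```

Substituting this calibrated value for the self-report in the hazard model is what de-attenuates the diet-mortality association, the mechanism behind the roughly threefold increase in the log hazard ratio reported above.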