69 results for requirement-based testing
Abstract:
The time elapsed since infection of a human immunodeficiency virus (HIV)-infected individual (the age of infection) is an important but often only poorly known quantity. We assessed whether the fraction of ambiguous nucleotides obtained from bulk sequencing, as done for genotypic resistance testing, can serve as a proxy for this parameter.
Abstract:
Background Replicative phenotypic HIV resistance testing (rPRT) uses recombinant infectious virus to measure viral replication in the presence of antiretroviral drugs. Due to its high sensitivity for detecting viral minorities and its power to dissect complex viral resistance patterns and mixed virus populations, rPRT might help to improve HIV resistance diagnostics, particularly for patients with multiple drug failures. The aim was to investigate whether the addition of rPRT to genotypic resistance testing (GRT), compared to GRT alone, is beneficial for obtaining a virological response in heavily pre-treated HIV-infected patients. Methods Patients with resistance tests between 2002 and 2006 were followed within the Swiss HIV Cohort Study (SHCS). We assessed patients' virological success after their antiretroviral therapy was switched following resistance testing. Multilevel logistic regression models with SHCS centre as a random effect were used to investigate the association between the type of resistance test and virological response (HIV-1 RNA <50 copies/mL or ≥1.5 log reduction). Results Of 1158 individuals with resistance tests, 221 with GRT+rPRT and 937 with GRT were eligible for analysis. Overall virological response rates were 85.1% for GRT+rPRT and 81.4% for GRT. In the subgroup of patients with >2 previous failures, the odds ratio (OR) for virological response of GRT+rPRT compared to GRT was 1.45 (95% CI 1.00-2.09). Multivariate analyses indicated a significant improvement with GRT+rPRT compared to GRT alone (OR 1.68, 95% CI 1.31-2.15). Conclusions In heavily pre-treated patients, rPRT-based resistance information adds benefit, contributing to a higher rate of treatment success.
Abstract:
Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and of using independent datasets to test hypotheses generated from a data-driven analysis.
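The max-criterion stated in the abstract (A < AV > V) reduces to a simple comparison; a minimal sketch in Python (the function name is ours, chosen for illustration):

```python
def meets_max_criterion(a: float, v: float, av: float) -> bool:
    """Max-criterion for multisensory integration (A < AV > V):
    the audio-visual (AV) response must exceed both the auditory (A)
    and the visual (V) unisensory responses."""
    return av > a and av > v
```

Applied per region to the second, independent dataset, this rule is what qualified nine of the twelve candidate regions as multisensory.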
Abstract:
Cytomegalovirus (CMV) infection is associated with significant morbidity and mortality in transplant recipients. Resistance to ganciclovir is increasingly observed. According to current guidelines, direct drug resistance testing is not always performed, due to high costs and workload, even when resistance is suspected.
Abstract:
Development of novel implants in orthopaedic trauma surgery is based on limited datasets of cadaver trials or artificial bone models. A method has been developed whereby implants can be constructed in an evidence-based manner founded on a large anatomic database consisting of more than 2,000 datasets of bones extracted from CT scans. The aim of this study was the development and clinical application of an anatomically pre-contoured plate for the treatment of distal fibular fractures based on the anatomical database. 48 Caucasian and Asian bone models (left and right) from the database were used for the preliminary optimization process and validation of the fibula plate. The implant was constructed to fit bilaterally in a lateral position of the fibula. Then a biomechanical comparison of the designed implant to the current gold standard in the treatment of distal fibular fractures (locking 1/3 tubular plate) was conducted. Finally, a clinical surveillance study to evaluate the grade of implant fit achieved was performed. The results showed that with a virtual anatomic database it was possible to design a fibula plate with an optimized fit for a large proportion of the population. Biomechanical testing showed the novel fibula plate to be superior to 1/3 tubular plates in 4-point bending tests. The clinical application showed a very high degree of primary implant fit. Only in a small minority of cases was further intra-operative implant bending necessary. Therefore, the goal of developing an implant for the treatment of distal fibular fractures based on the evidence of a large anatomical database was attained. Biomechanical testing showed good results regarding stability, and the clinical application confirmed the high grade of anatomical fit.
Abstract:
Traditionally, the routine artificial digestion test is applied to assess the presence of Trichinella larvae in pigs. However, this diagnostic method has a low sensitivity compared to serological tests. The results from artificial digestion tests in Switzerland were evaluated over a period of 15 years to determine when freedom from infection could be confirmed on the basis of these data. Freedom was defined as a 95% probability that the prevalence of infection was below 0.0001%. Freedom was demonstrated after 12 years at the latest. A new risk-based surveillance approach was then developed based on serology. Risk-based surveillance was also assessed over 15 years, starting in 2010. It was shown that by using this design, the sample size could be reduced by at least a factor of 4 when compared with the traditional testing regimen, without lowering the level of confidence in the Trichinella-free status of the pig population.
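The freedom-from-infection criterion (95% probability that prevalence is below a design threshold) can be illustrated with a standard binomial approximation. This is a generic sketch with assumed parameter names, not the authors' surveillance model:

```python
def confidence_of_freedom(n_tested: int, design_prevalence: float,
                          sensitivity: float = 1.0) -> float:
    """Probability of detecting at least one infected animal if the
    population were infected at the design prevalence, i.e. the
    confidence in freedom after n_tested negative samples
    (simple binomial approximation for a large population)."""
    p_detect_one = sensitivity * design_prevalence
    return 1.0 - (1.0 - p_detect_one) ** n_tested
```

Confidence grows with both sample size and test sensitivity, which is why switching to the more sensitive serological test can support the roughly four-fold sample-size reduction reported above at the same level of confidence.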
Abstract:
Background Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have previously demonstrated that a patient's antibody reaction pattern in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score) provides information on the duration of infection, which is unaffected by clinical, immunological and viral variables. In this report we set out to determine the diagnostic performance of Inno-Lia algorithms for identifying incident infections in patients with known duration of infection, and evaluated the algorithms in annual cohorts of HIV notifications. Methods Diagnostic sensitivity was determined in 527 treatment-naive patients infected for up to 12 months. Specificity was determined in 740 patients infected for longer than 12 months. Plasma was tested by Inno-Lia and classified as either incident (≤12 months) or older infection by 26 different algorithms. Incident infection rates (IIR) were calculated based on the diagnostic sensitivity and specificity of each algorithm and the rule that the total of incident results is the sum of true-incident and false-incident results, which can be calculated by means of the pre-determined sensitivity and specificity. Results The 10 best algorithms had a mean raw sensitivity of 59.4% and a mean specificity of 95.1%. Adjustment for overrepresentation of patients in the first quarter year of infection further reduced the sensitivity. In the preferred model, the mean adjusted sensitivity was 37.4%. Application of the 10 best algorithms to four annual cohorts of HIV-1 notifications totalling 2,595 patients yielded a mean IIR of 0.35 in 2005/6 (baseline) and of 0.45, 0.42 and 0.35 in 2008, 2009 and 2010, respectively. The increase between baseline and 2008 and the ensuing decreases were highly significant. Other adjustment models yielded different absolute IIR, although the relative changes between the cohorts were identical for all models.
Conclusions The method can be used for comparing IIR in annual cohorts of HIV notifications. The use of several different algorithms in combination, each with its own sensitivity and specificity to detect incident infection, is advisable as this reduces the impact of individual imperfections stemming primarily from relatively low sensitivities and sampling bias.
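The stated rule (observed incident results = true-incident + false-incident results) leads to a standard misclassification correction; a minimal sketch, with our own function name:

```python
def adjusted_iir(observed_fraction: float, sensitivity: float,
                 specificity: float) -> float:
    """Back-calculate the true incident-infection rate (IIR) from the
    observed fraction classified as incident, given the algorithm's
    sensitivity and specificity:
        observed = sens * IIR + (1 - spec) * (1 - IIR)
    solved for IIR (a Rogan-Gladen-style correction)."""
    return (observed_fraction + specificity - 1.0) / (sensitivity + specificity - 1.0)
```

With the mean values reported above (sensitivity 59.4%, specificity 95.1%), an observed incident fraction of about 24% corresponds to a true IIR of roughly 0.35.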
Abstract:
Point-of-care testing (POCT) remains under scrutiny by healthcare professionals because of its young and largely untried history. POCT methods are being developed by a few major equipment companies based on rapid progress in informatics and nanotechnology. Issues such as POCT quality control, comparability with standard laboratory procedures, standardisation, traceability and round-robin testing are being left to hospitals. As a result, the clinical and operational benefits of POCT were first evident for patients on the operating table. For the management of cardiovascular surgery patients, POCT technology is an indispensable aid. Improvement of the technology has meant that clinical laboratory pathologists now recognise the need for POCT beyond their high-throughput areas.
Abstract:
BACKGROUND: We sought to characterize the impact that hepatitis C virus (HCV) infection has on CD4 cells during the first 48 weeks of antiretroviral therapy (ART) in previously ART-naive human immunodeficiency virus (HIV)-infected patients. METHODS: The HIV/AIDS Drug Treatment Programme at the British Columbia Centre for Excellence in HIV/AIDS distributes all ART in this Canadian province. Eligible individuals were those whose first-ever ART included 2 nucleoside reverse transcriptase inhibitors and either a protease inhibitor or a nonnucleoside reverse transcriptase inhibitor and who had a documented positive result for HCV antibody testing. Outcomes were binary events (time to an increase of ≥75 CD4 cells/mm3 or an increase of ≥10% in the percentage of CD4 cells in the total T cell population [CD4 cell fraction]) and continuous repeated measures. Statistical analyses used parametric and nonparametric methods, including multivariate mixed-effects linear regression analysis and Cox proportional hazards analysis. RESULTS: Of 1186 eligible patients, 606 (51%) were positive and 580 (49%) were negative for HCV antibodies. HCV antibody-positive patients were slower to have an absolute (P<.001) and a fraction (P = .02) CD4 cell event. In adjusted Cox proportional hazards analysis (controlling for age, sex, baseline absolute CD4 cell count, baseline plasma viral load (pVL), type of ART initiated, AIDS diagnosis at baseline, adherence to ART regimen, and number of CD4 cell measurements), HCV antibody-positive patients were less likely to have an absolute CD4 cell event (adjusted hazard ratio [AHR] 0.84; 95% confidence interval [CI] 0.72-0.98) and somewhat less likely to have a CD4 cell fraction event (AHR 0.89; 95% CI 0.70-1.14) than HCV antibody-negative patients.
In multivariate mixed-effects linear regression analysis, HCV antibody-negative patients had increases of an average of 75 cells in the absolute CD4 cell count and 4.4% in the CD4 cell fraction, compared with 20 cells and 1.1% in HCV antibody-positive patients, during the first 48 weeks of ART, after adjustment for time-updated pVL, number of CD4 cell measurements, and other factors. CONCLUSION: HCV antibody-positive HIV-infected patients may have an altered immunologic response to ART.
Abstract:
BACKGROUND: The advent of urine testing for Chlamydia trachomatis has raised the possibility of large-scale screening for this sexually transmitted infection, which is now the most common in the United Kingdom. The purpose of this study was to investigate the effect of an invitation to be screened for chlamydia and of receiving a negative result on levels of anxiety, depression and self-esteem. METHODS: 19,773 men and women aged 16 to 39 years, selected at random from 27 general practices in two large city areas (Bristol and Birmingham) were invited by post to send home-collected urine samples or vulvo-vaginal swabs for chlamydia testing. Questionnaires enquiring about anxiety, depression and self-esteem were sent to random samples of those offered screening: one month before the dispatch of invitations; when participants returned samples; and after receiving a negative result. RESULTS: Home screening was associated with an overall reduction in anxiety scores. An invitation to participate did not increase anxiety levels. Anxiety scores in men were lower after receiving the invitation than at baseline. Amongst women anxiety was reduced after receipt of negative test results. Neither depression nor self-esteem scores were affected by screening. CONCLUSION: Postal screening for chlamydia does not appear to have a negative impact on overall psychological well-being and can lead to a decrease in anxiety levels among respondents. There is, however, a clear difference between men and women in when this reduction occurs.
Abstract:
BACKGROUND: Short-acting agents for neuromuscular block (NMB) require frequent dosing adjustments for individual patients' needs. In this study, we verified a new closed-loop controller for mivacurium dosing in clinical trials. METHODS: Fifteen patients were studied. T1% measured with electromyography was used as the input signal for the model-based controller. After induction of propofol/opiate anaesthesia, stabilization of the baseline electromyography signal was awaited, and a bolus of 0.3 mg kg-1 mivacurium was then administered to facilitate endotracheal intubation. Closed-loop infusion was started thereafter, targeting a neuromuscular block of 90%. Setpoint deviation, the number of manual interventions and surgeons' complaints were recorded. Drug use and its variability between and within patients were evaluated. RESULTS: Median time of closed-loop control for the 11 patients included in the data processing was 135 [89-336] min (median [range]). Four patients had to be excluded because of sensor problems. Mean absolute deviation from the setpoint was 1.8 +/- 0.9 T1%. Neither manual interventions nor complaints from the surgeons were recorded. Mean necessary mivacurium infusion rate was 7.0 +/- 2.2 microg kg-1 min-1. Mean infusion rates over 30-min intervals differed by up to a factor of 1.8 between the highest and lowest requirement in the same patient. CONCLUSIONS: Neuromuscular block can be precisely controlled with mivacurium using our model-based controller. The amount of mivacurium needed to maintain T1% at defined constant levels differed largely between and within patients. Closed-loop control therefore seems advantageous for automatically maintaining neuromuscular block at constant levels.
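The study's controller is model-based; as a generic illustration of closed-loop dosing toward a 90% block setpoint, here is a minimal proportional-integral (PI) feedback sketch (class name, gains and clamping are illustrative assumptions, not the paper's algorithm):

```python
class PIController:
    """Minimal PI feedback loop for an infusion rate. Input is the
    measured neuromuscular block (percent); output is a non-negative
    infusion rate. An illustrative stand-in for the paper's
    model-based controller, not a reimplementation of it."""

    def __init__(self, kp: float, ki: float, setpoint: float = 90.0):
        self.kp, self.ki, self.setpoint = kp, ki, setpoint
        self.integral = 0.0

    def update(self, measured_block: float, dt: float) -> float:
        error = self.setpoint - measured_block  # positive when block is too shallow
        self.integral += error * dt
        rate = self.kp * error + self.ki * self.integral
        return max(rate, 0.0)                   # an infusion rate cannot be negative
```

The loop increases the infusion rate while the measured block is below the setpoint and cuts it to zero when the block overshoots; a model-based controller additionally exploits the drug's pharmacokinetics to anticipate rather than merely react.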
Abstract:
INTRODUCTION: Osteoporosis is not only responsible for an increased number of metaphyseal and spinal fractures but also complicates their treatment. To prevent initial loosening, we developed a new implant with an enlarged implant/bone interface based on the concept of perforated, hollow cylinders. We evaluated whether osseointegration of a hollow-cylinder-based implant takes place in normal or osteoporotic bone of sheep under functional loading conditions during anterior stabilization of the lumbar spine. MATERIALS AND METHODS: Osseointegration of the cylinders and the status of the fused segments (ventral corpectomy, replacement with iliac strut, and fixation with the test implant) were investigated in six osteoporotic (age 6.9 +/- 0.8 years, mean body weight 61.1 +/- 5.2 kg) and seven control sheep (age 6.1 +/- 0.2 years, mean body weight 64.9 +/- 5.7 kg). Osteoporosis was induced using a combination protocol of ovariectomy, high-dose prednisone, a calcium- and phosphorus-reduced diet and movement restriction. Osseointegration was quantified using fluorescence and conventional histology; fusion status was determined using biomechanical testing of the stabilized segment in a six-degree-of-freedom loading device as well as radiological and histological staging. RESULTS: Intact bone trabeculae were found in 70% of all perforations, without differences between the two groups (P = 0.26). Inside the cylinders, bone volume/total volume was significantly higher than in the control vertebra (50 +/- 16 vs. 28 +/- 13%) of the same animal (P<0.01), but significantly less (P<0.01) than in the near surrounding (60 +/- 21%). After biomechanical testing as described in "Materials and methods", seven spines (three healthy and four osteoporotic) were classified as completely fused and six (four healthy and two osteoporotic) as not fused after a 4-month observation time. All endplates were bridged with intact trabeculae in the histological slices.
CONCLUSIONS: The high number of perforations, filled with intact trabeculae, indicates an adequate fixation; bridging trabeculae between adjacent endplates and tricortical iliac struts in all vertebrae indicates that the anchorage is adequate to promote fusion in this animal model, even in the osteoporotic sheep.
Abstract:
OBJECTIVE: This study aimed to assess the potential cost-effectiveness of testing patients with nephropathies for the I/D polymorphism before starting angiotensin-converting enzyme (ACE) inhibitor therapy, using a 3-year time horizon and a healthcare perspective. METHODS: We used a combination of a decision analysis and a Markov modeling technique to evaluate the potential economic value of this pharmacogenetic test in preventing unfavorable treatment in patients with nephropathies. The estimation of the predictive value of the I/D polymorphism is based on a systematic review showing that DD carriers tend to respond well to ACE inhibitors, while II carriers seem not to benefit adequately from this treatment. Data on ACE inhibitor effectiveness in nephropathy were derived from the REIN (Ramipril Efficacy in Nephropathy) trial. We calculated the number of patients with end-stage renal disease (ESRD) prevented and the differences in the incremental costs and incremental effect expressed as life-years free of ESRD. A probabilistic sensitivity analysis was conducted to determine the robustness of the results. RESULTS: Compared with unselective treatment, testing patients for their ACE genotype could save 12 patients per 1000 from developing ESRD during the 3 years covered by the model. As the mean net cost savings were EUR 356,000 per 1000 patient-years, and 9 life-years free of ESRD were gained, selective treatment seems to be dominant. CONCLUSION: The study suggests that genetic testing for the I/D polymorphism in patients with nephropathy before initiating ACE inhibitor therapy will most likely be cost-effective, even if the risk for II carriers of developing ESRD when treated with ACE inhibitors is only 1.4% higher than for DD carriers. Further studies, however, are required to corroborate the difference in treatment response between ACE genotypes before genetic testing can be justified in clinical practice.
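The decision-analytic backbone of such an evaluation is a cohort Markov model. A deliberately tiny two-state sketch (nephropathy → ESRD) with a placeholder transition probability, not the values the authors derived from the REIN trial:

```python
def expected_esrd_cases(cohort_size: float, p_esrd_per_year: float,
                        years: int = 3) -> float:
    """Two-state Markov cohort sketch: each cycle (year), a fraction of
    the remaining at-risk patients progresses to ESRD, an absorbing
    state. Returns the expected number of ESRD cases over the horizon."""
    at_risk, cases = float(cohort_size), 0.0
    for _ in range(years):
        new_cases = at_risk * p_esrd_per_year
        cases += new_cases
        at_risk -= new_cases
    return cases
```

Running such a model twice, once with genotype-guided (selective) treatment and once with unselective treatment, gives the difference in ESRD cases (12 per 1000 patients in the study) that, together with the cost inputs, drives the cost-effectiveness result.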
Testing the structural and cross-cultural validity of the KIDSCREEN-27 quality of life questionnaire
Abstract:
OBJECTIVES: The aim of this study is to assess the structural and cross-cultural validity of the KIDSCREEN-27 questionnaire. METHODS: The 27-item version of the KIDSCREEN instrument was derived from a longer 52-item version and was administered to young people aged 8-18 years in 13 European countries in a cross-sectional survey. Structural and cross-cultural validity were tested using multitrait multi-item analysis, exploratory and confirmatory factor analysis, and Rasch analyses. Zumbo's logistic regression method was applied to assess differential item functioning (DIF) across countries. Reliability was assessed using Cronbach's alpha. RESULTS: Responses were obtained from n = 22,827 respondents (response rate 68.9%). For the combined sample from all countries, exploratory factor analysis with procrustean rotations revealed a five-factor structure which explained 56.9% of the variance. Confirmatory factor analysis indicated an acceptable model fit (RMSEA = 0.068, CFI = 0.960). The unidimensionality of all dimensions was confirmed (INFIT: 0.81-1.15). Differential item functioning (DIF) results across the 13 countries showed that 5 items presented uniform DIF whereas 10 displayed non-uniform DIF. Reliability was acceptable (Cronbach's alpha = 0.78-0.84 for individual dimensions). CONCLUSIONS: There was substantial evidence for the cross-cultural equivalence of the KIDSCREEN-27 across the countries studied and the factor structure was highly replicable in individual countries. Further research is needed to correct scores based on DIF results. The KIDSCREEN-27 is a new short and promising tool for use in clinical and epidemiological studies.
Abstract:
INTRODUCTION: Cognitive complaints, such as poor concentration and memory deficits, are frequent after whiplash injury and play an important role in disability. The origin of these complaints remains controversial. Some authors postulate brain lesions as a consequence of whiplash injuries. Potential diffuse axonal injury (DAI) with subsequent atrophy of the brain and ventricular expansion is of particular interest, as focal brain lesions have not been documented so far in whiplash injury. OBJECTIVE: To investigate whether traumatic brain injury can be identified using a magnetic resonance (MR)-based quantitative analysis of normalized ventricle-brain ratios (VBR) in chronic whiplash patients with subjective cognitive impairment that cannot be objectively confirmed by neuropsychological testing. MATERIALS AND METHODS: MR examination was performed in 21 patients with whiplash injury and symptoms persisting for 9 months on average, and in 18 matched healthy controls. Conventional MR imaging (MRI) was used to assess the volumes of grey and white matter and of the ventricles. The normalized VBR was calculated. RESULTS: The values of normalized VBR did not differ between whiplash patients and healthy controls (F = 0.216, P = 0.645). CONCLUSIONS: This study does not support loss of brain tissue following whiplash injury as measured by VBR. On this basis, traumatic brain injury with subsequent DAI does not seem to be the underlying mechanism for persistent concentration and memory deficits that are subjectively reported but not objectively verifiable as neuropsychological deficits.
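As an illustration of the outcome measure, one common definition of a ventricle-brain ratio is ventricular volume over total parenchymal (grey + white matter) volume. The exact normalization used in the study may differ, so treat this as an assumed sketch:

```python
def ventricle_brain_ratio(ventricle_ml: float, grey_ml: float,
                          white_ml: float) -> float:
    """Ventricle-brain ratio: ventricular volume relative to total
    grey + white matter volume, all in the same units (e.g. mL).
    One common definition; the study's normalization may differ."""
    return ventricle_ml / (grey_ml + white_ml)
```

A group difference in this ratio would indicate parenchymal loss (atrophy) with compensatory ventricular expansion; the study found no such difference between patients and controls.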