92 results for STANDARDIZED ASSESSMENT
at Université de Lausanne, Switzerland
Abstract:
BACKGROUND: Adequate pain assessment is critical for evaluating the efficacy of analgesic treatment in clinical practice and during the development of new therapies. Yet the currently used scores of global pain intensity fail to reflect the diversity of pain manifestations and the complexity of underlying biological mechanisms. We have developed a tool for a standardized assessment of pain-related symptoms and signs that differentiates pain phenotypes independent of etiology. METHODS AND FINDINGS: Using a structured interview (16 questions) and a standardized bedside examination (23 tests), we prospectively assessed symptoms and signs in 130 patients with peripheral neuropathic pain caused by diabetic polyneuropathy, postherpetic neuralgia, or radicular low back pain (LBP), and in 57 patients with non-neuropathic (axial) LBP. A hierarchical cluster analysis revealed distinct association patterns of symptoms and signs (pain subtypes) that characterized six subgroups of patients with neuropathic pain and two subgroups of patients with non-neuropathic pain. Using a classification tree analysis, we identified the most discriminatory assessment items for the identification of pain subtypes. We combined these six interview questions and ten physical tests in a pain assessment tool that we named Standardized Evaluation of Pain (StEP). We validated StEP for the distinction between radicular and axial LBP in an independent group of 137 patients. StEP identified patients with radicular pain with high sensitivity (92%; 95% confidence interval [CI] 83%-97%) and specificity (97%; 95% CI 89%-100%). The diagnostic accuracy of StEP exceeded that of a dedicated screening tool for neuropathic pain and spinal magnetic resonance imaging. In addition, we were able to reproduce subtypes of radicular and axial LBP, underscoring the utility of StEP for discerning distinct constellations of symptoms and signs. 
CONCLUSIONS: We present a novel method of identifying pain subtypes that we believe reflect underlying pain mechanisms. We demonstrate that this new approach to pain assessment helps separate radicular from axial back pain. Beyond diagnostic utility, a standardized differentiation of pain subtypes that is independent of disease etiology may offer a unique opportunity to improve targeted analgesic treatment.
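The reported sensitivity and specificity with 95% confidence intervals can be reproduced from a 2x2 validation table. A minimal Python sketch using hypothetical counts (the abstract does not report the raw table) and the Wilson score interval, which will differ slightly from whatever exact method the authors used:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Hypothetical validation counts (not the study's data):
tp, fn = 92, 8   # radicular LBP correctly / incorrectly classified
tn, fp = 36, 1   # axial LBP correctly / incorrectly classified

sensitivity = tp / (tp + fn)   # 0.92
specificity = tn / (tn + fp)   # ~0.97
sens_lo, sens_hi = wilson_ci(tp, tp + fn)
```

With these made-up counts the point estimates match the abstract's figures, illustrating how the interval narrows as the validation sample grows.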
Abstract:
BACKGROUND: The aim of this study was to assess whether virtual reality (VR) can discriminate between the skills of novices and intermediate-level laparoscopic surgical trainees (construct validity), and whether the simulator assessment correlates with an expert's evaluation of performance. METHODS: Three hundred and seven (307) participants of the 19th-22nd Davos International Gastrointestinal Surgery Workshops performed the clip-and-cut task on the Xitact LS 500 VR simulator (Xitact S.A., Morges, Switzerland). According to their previous experience in laparoscopic surgery, participants were assigned to the basic course (BC) or the intermediate course (IC). Objective performance parameters recorded by the simulator were compared to the standardized assessment by the course instructors during laparoscopic pelvitrainer and conventional surgery exercises. RESULTS: IC participants performed significantly better on the VR simulator than BC participants for task completion time as well as for economy of movement of the right instrument, but not of the left instrument. Participants with maximum scores in the pelvitrainer cholecystectomy task performed the VR trial significantly faster than those who scored less. In the conventional surgery task, a significant difference between those who scored the maximum and those who scored less was found not only for task completion time, but also for economy of movement of the right instrument. CONCLUSIONS: VR simulation provides a valid assessment of psychomotor skills and some basic aspects of spatial skills in laparoscopic surgery. Furthermore, VR allows discrimination between trainees with different levels of experience in laparoscopic surgery, establishing construct validity for the Xitact LS 500 clip-and-cut task. Virtual reality may become the gold standard for assessing and monitoring surgical skills in laparoscopic surgery.
Abstract:
BACKGROUND: Long-lasting food impactions requiring endoscopic bolus removal occur frequently in patients with eosinophilic esophagitis (EoE) and harbor a risk for severe esophageal injuries. We evaluated whether treatment with swallowed topical corticosteroids is able to reduce the risk of occurrence of this complication. METHODS: We analyzed data from the Swiss EoE Cohort Study. Patients with yearly clinic visits, during which standardized assessment of symptoms, endoscopic, histologic, and laboratory findings was carried out, were included. RESULTS: A total of 206 patients (157 males) were analyzed. The median follow-up time was 5 years with a total of 703 visits (mean 3.41 visits/patient). During the follow-up period, 33 patients (16 % of the cohort) experienced 42 impactions requiring endoscopic bolus removal. We evaluated the following factors regarding the outcome 'bolus impaction' by univariate logistic regression modeling: swallowed topical corticosteroid therapy (OR 0.503, 95%-CI 0.255-0.993, P = 0.048), presence of EoE symptoms (OR 1.150, 95%-CI 0.4668-2.835, P = 0.761), esophageal stricture (OR 2.832, 95%-CI 1.508-5.321, P = 0.001), peak eosinophil count >10 eosinophils/HPF (OR 0.724, 95%-CI 0.324-1.621, P = 0.433), blood eosinophilia (OR 1.532, 95%-CI 0.569-4.118, P = 0.398), and esophageal dilation (OR 1.852, 95%-CI 1.034-3.755, P = 0.017). In the multivariate model, the following factors were significantly associated with bolus impaction: swallowed topical corticosteroid therapy (OR 0.411, 95%-CI 0.203-0.835, P = 0.014) and esophageal stricture (OR 2.666, 95%-CI 1.259-5.645, P = 0.01). Increasing frequency of use of swallowed topical steroids was associated with a lower risk for bolus impactions. CONCLUSIONS: Treatment of EoE with swallowed topical corticosteroids significantly reduces the risk for long-lasting bolus impactions.
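The univariate odds ratios above can be illustrated from a 2x2 table. A sketch of the Woolf (log) confidence interval in Python, using hypothetical counts chosen only to yield a protective OR below 1 (the abstract reports fitted ORs, not the raw table):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table.
    Hypothetical layout: a, b = impaction yes/no among treated patients,
    c, d = impaction yes/no among untreated patients."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Made-up counts, not the study's data:
or_, lo, hi = odds_ratio_ci(10, 90, 23, 83)
```

An OR whose whole confidence interval lies below 1, as for swallowed topical corticosteroids in the multivariate model, indicates a statistically significant protective association.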
Abstract:
An in vitro angiotensin II (AngII) receptor-binding assay was developed to monitor the degree of receptor blockade under standardized conditions. This in vitro method was validated by comparing its results with those obtained in vivo with the injection of exogenous AngII and the measurement of the AngII-induced changes in systolic blood pressure. For this purpose, 12 normotensive subjects were enrolled in a double-blind, four-way cross-over study comparing the AngII receptor blockade induced by a single oral dose of losartan (50 mg), valsartan (80 mg), irbesartan (150 mg), and placebo. A significant linear relationship between the two methods was found (r = 0.723, n = 191, P<.001). However, there was wide scatter in the in vivo data in the absence of active AngII receptor blockade. The relationship between the two methods was markedly improved (r = 0.87, n = 47, P<.001) when only measurements taken 4 h after administration of the drugs were considered (maximal antagonist activity observed in vivo), suggesting that the two methods are equally effective in assessing the degree of AT-1 receptor blockade, but with a greatly reduced variability in the in vitro assay. In addition, the pharmacokinetic/pharmacodynamic analysis performed with the three antagonists suggests that the AT-1 receptor-binding assay works as a bioassay that integrates the antagonistic properties of all active drug components in the plasma. This standardized in vitro binding assay represents a simple, reproducible, and precise tool for characterizing the pharmacodynamic profile of AngII receptor antagonists in humans.
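The linear relationship between the two methods (r = 0.723) is a Pearson correlation. A plain-Python sketch with made-up paired blockade measurements, purely to show the computation:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Made-up in vitro vs. in vivo receptor-blockade values (percent):
in_vitro = [10, 35, 50, 70, 85]
in_vivo = [15, 30, 55, 65, 80]
r = pearson_r(in_vitro, in_vivo)
```

The tighter clustering of the 4-hour measurements in the study is exactly what raises r from 0.723 toward 0.87: reducing scatter around the regression line increases the covariance relative to the marginal spreads.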
Abstract:
PURPOSE: Cardiovascular magnetic resonance (CMR) has become a robust and important diagnostic imaging modality in cardiovascular medicine. However, insufficient image quality may compromise its diagnostic accuracy. No standardized criteria are available to assess the quality of CMR studies. We aimed to describe and validate standardized criteria to evaluate the quality of CMR studies including: a) cine steady-state free precession, b) delayed gadolinium enhancement, and c) adenosine stress first-pass perfusion. These criteria will serve for the assessment of image quality in the setting of the Euro-CMR registry. METHOD AND MATERIALS: First, a total of 45 quality criteria were defined (35 qualitative criteria with a score from 0-3, and 10 quantitative criteria). The qualitative score ranged from 0 to 105; the lower the qualitative score, the better the quality. The quantitative criteria were based on the absolute signal intensity (delayed enhancement) and on the signal increase (perfusion) of the anterior/posterior left ventricular wall after gadolinium injection. These criteria were then applied in 30 patients scanned with a 1.5T system and in 15 patients scanned with a 3.0T system. The examinations were jointly interpreted by 3 CMR experts and 1 study nurse. In these 45 patients the correlation between the results of the quality assessment obtained by the different readers was calculated. RESULTS: On the 1.5T machine, the mean quality score was 3.5. The mean difference between each pair of observers was 0.2 (5.7%) with a mean standard deviation of 1.4. On the 3.0T machine, the mean quality score was 4.4. The mean difference between each pair of observers was 0.3 (6.4%) with a mean standard deviation of 1.6.
The quantitative quality assessments between observers were well correlated for the 1.5T machine: R was between 0.78 and 0.99. CONCLUSION: The described criteria for the assessment of CMR image quality are robust and have a low inter-observer variability, especially on 1.5T systems. CLINICAL RELEVANCE/APPLICATION: These criteria will allow the standardization of CMR examinations. They will help to improve the overall quality of examinations and the comparison between clinical studies.
Abstract:
BACKGROUND: Cardiovascular magnetic resonance (CMR) has become an important diagnostic imaging modality in cardiovascular medicine. However, insufficient image quality may compromise its diagnostic accuracy. We aimed to describe and validate standardized criteria to evaluate a) cine steady-state free precession (SSFP), b) late gadolinium enhancement (LGE), and c) stress first-pass perfusion images. These criteria will serve for quality assessment in the setting of the Euro-CMR registry. METHODS: Thirty-five qualitative criteria were defined (scores 0-3), with lower scores indicating better image quality. In addition, quantitative parameters were measured, yielding 2 additional quality criteria, i.e. signal-to-noise ratio (SNR) of non-infarcted myocardium (as a measure of correct signal nulling of healthy myocardium) for LGE and % signal increase during contrast medium first-pass for perfusion images. These qualitative and quantitative criteria were assessed in a total of 90 patients: 60 patients scanned at our own institution at 1.5T (n=30) and 3T (n=30), and 30 patients randomly chosen from the Euro-CMR registry examined at 1.5T. Analyses were performed by 2 SCMR level-3 experts, 1 trained study nurse, and 1 trained medical student. RESULTS: The global quality score was 6.7±4.6 (n=90, mean of 4 observers, maximum possible score 64), range 6.4-6.9 (p=0.76 between observers). It ranged from 4.0-4.3 for 1.5T (p=0.96 between observers), from 5.9-6.9 for 3T (p=0.33 between observers), and from 8.6-10.3 for the Euro-CMR cases (p=0.40 between observers). The inter- (n=4) and intra-observer (n=2) agreement for the global quality score, i.e. the percentage of assignments to the same quality tertile, ranged from 80% to 88% and from 90% to 98%, respectively.
The agreement for the quantitative assessment of LGE images (scores 0-2 for SNR <2, 2-5, >5, respectively) ranged from 78-84% for the entire population, and 70-93% at 1.5T, 64-88% at 3T, and 72-90% for the Euro-CMR cases. The agreement for perfusion images (scores 0-2 for %SI increase >200%, 100-200%, <100%, respectively) ranged from 81-91% for the entire population, and 76-100% at 1.5T, 67-96% at 3T, and 62-90% for the Euro-CMR registry cases. The intra-class correlation coefficient for the global quality score was 0.83. CONCLUSIONS: The described criteria for the assessment of CMR image quality are robust, with good inter- and intra-observer agreement. Further research is needed to define the impact of image quality on the diagnostic and prognostic yield of CMR studies.
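Inter-observer agreement here is the fraction of studies that two readers place in the same quality tertile. A rank-based Python sketch (the registry's exact tertile cut-offs are not given, so this assigns tertiles by within-sample rank, which is only one plausible choice):

```python
def tertile_labels(scores):
    """Assign each score to a tertile (0 = best third) by within-sample rank."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    labels = [0] * len(scores)
    for rank, i in enumerate(order):
        labels[i] = rank * 3 // len(scores)
    return labels

def tertile_agreement(obs_a, obs_b):
    """Fraction of cases placed in the same tertile by both observers."""
    la, lb = tertile_labels(obs_a), tertile_labels(obs_b)
    return sum(x == y for x, y in zip(la, lb)) / len(la)

# Made-up global quality scores from two observers:
a = [3, 5, 2, 8, 6, 4]
b = [4, 5, 2, 9, 7, 3]
agreement = tertile_agreement(a, b)
```

With fixed external cut-offs instead of within-sample ranks, the same function applies after replacing `tertile_labels` with a simple threshold lookup.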
Abstract:
Currently, the most widely used criteria for assessing response to therapy in high-grade gliomas are based on two-dimensional tumor measurements on computed tomography (CT) or magnetic resonance imaging (MRI), in conjunction with clinical assessment and corticosteroid dose (the Macdonald Criteria). It is increasingly apparent that there are significant limitations to these criteria, which only address the contrast-enhancing component of the tumor. For example, chemoradiotherapy for newly diagnosed glioblastomas results in a transient increase in tumor enhancement (pseudoprogression) in 20% to 30% of patients, which is difficult to differentiate from true tumor progression. Antiangiogenic agents produce high radiographic response rates, as defined by a rapid decrease in contrast enhancement on CT/MRI that occurs within days of initiation of treatment and that is partly a result of reduced vascular permeability to contrast agents rather than a true antitumor effect. In addition, a subset of patients treated with antiangiogenic agents develop tumor recurrence characterized by an increase in the nonenhancing component depicted on T2-weighted/fluid-attenuated inversion recovery sequences. The recognition that contrast enhancement is nonspecific and may not always be a true surrogate of tumor response, together with the need to account for the nonenhancing component of the tumor, mandates that new criteria be developed and validated to permit accurate assessment of the efficacy of novel therapies. The Response Assessment in Neuro-Oncology Working Group is an international effort to develop new standardized response criteria for clinical trials in brain tumors. In this proposal, we present the recommendations for updated response criteria for high-grade gliomas.
Abstract:
Purpose: Dynamic high-field magnetic resonance (MR) defecography including the evacuation phase is a promising tool for the assessment of functional pelvic disorders, nowadays seen with increasing frequency in elderly women in particular. Learning objectives: 1. To describe the adequate technique of dynamic high-field MRI (3T) in assessing pelvic floor disorders. 2. To provide an overview of the most common pathologies occurring during the evacuation phase, especially in comparison with results of conventional defecography. Methods and materials: After describing the ideal technical parameters of MR defecography performed in the supine position after rectal gel filling with a 3 Tesla unit, including the evacuation phase, we stress the importance of using a standardized evaluation system for the exact assessment of pelvic floor pathophysiology. Results: The typical pelvic floor disorders occurring before and/or during the evacuation phase, such as sphincter insufficiency, vaginal vault and/or uterine prolapse, cystourethrocele, peritoneo-/entero-/sigmoïdocele, or rectal prolapse, are demonstrated. The difference between the terms "pelvic floor descent" and "pelvic floor relaxation" is pictorially outlined. MR results are compared with those of conventional defecography. Conclusion: Exact knowledge of the correct technique, including the evacuation phase, and the use of a standardized evaluation system in assessing pelvic floor disorders by dynamic high-field MRI are mandatory for an accurate and reproducible diagnosis.
Abstract:
During the past twenty years, various instruments have been developed for the assessment of substance use in adolescents, mainly in the United States. However, few of them have been adapted to, and validated in, French-speaking populations. Consequently, although increasing alcohol and drug use among teenagers has become a major concern, the various health and social programs developed in response to this specific problem have received little attention with regard to follow-up and outcome assessment. A standardized multidimensional assessment instrument adapted for adolescents is needed to assess the individual needs of adolescents and assign them to the most appropriate treatment setting, to provide a single measurement within and across health and social systems, and to conduct treatment outcome evaluations. Moreover, having an available instrument makes it possible to develop longitudinal and transcultural research studies. For this reason, a French version of the Adolescent Drug Abuse Diagnosis (ADAD) was developed and validated at the University Child and Adolescent Psychiatric Clinic in Lausanne, Switzerland. This article aims to discuss the methodological issues that we faced when using the ADAD instrument in a 4-year longitudinal study including adolescent substance users. Methodological aspects relating to the content and format of the instrument, the assessment administration and the statistical analyses are discussed.
Abstract:
Cognitive impairment has been identified in the early phase of schizophrenia spectrum disorders, and is a major contributor to disease-related disability. While screening tools assessing cognitive impairment have been validated for adult schizophrenic populations, there is a need for brief, easily administered, standardized instruments that provide clinically relevant information for adolescents. This study examines the utility of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) in identifying and quantifying neurocognitive impairment in adolescents with schizophrenia spectrum disorders and other serious psychiatric illnesses. 112 adolescents, including 32 healthy subjects and 80 patients, were administered the RBANS. Patients with psychotic disorders demonstrated significant impairment on the RBANS total score compared to patients with other disorders and healthy controls, but this impairment appeared somewhat less severe than is typically reported in adult patients with schizophrenia on this measure. The RBANS appears to be sensitive in the detection of neurocognitive impairment in a psychiatric population of adolescents with psychotic symptomatology, and may therefore have utility as a clinical screening instrument and/or neurocognitive outcome measure in this population.
Oral cancer treatments and adherence: medication event monitoring system assessment for capecitabine
Abstract:
Background: Oncological treatments are traditionally administered via intravenous injection by qualified personnel. Oral formulations, which are developing rapidly, are preferred by patients and facilitate administration; however, they may increase non-adherence. In this study, 4 common oral chemotherapeutics are given to 50 patients, with inclusion still ongoing, divided into 4 groups. The aim is to evaluate adherence and offer these patients interdisciplinary support with the joint help of doctors and pharmacists. We present here the results for capecitabine. Materials and Methods: The final goal is to evaluate adherence in 50 patients split into 4 groups according to oral treatment (letrozole/exemestane, imatinib/sunitinib, capecitabine, and temozolomide), using persistence and quality of execution as parameters. These parameters are evaluated using a medication event monitoring system (MEMS®) in addition to routine oncological visits and semi-structured interviews. Patients were monitored for the entire duration of treatment up to a maximum of 1 year. Patient satisfaction was assessed at the end of the monitoring period using a standardized questionnaire. Results: The capecitabine group included 2 women and 8 men with a median age of 55 years (range: 36-77 years) monitored for an average duration of 100 days (range: 5-210 days). Persistence was 98% and quality of execution 95%. 5 patients underwent cyclic treatment (2 out of 3 weeks) and 5 patients continuous treatment. Toxicities higher than grade 1 were grade 2-3 hand-foot syndrome in 1 patient and grade 3 acute coronary syndrome in 1 patient, both without impact on adherence. Patients were satisfied with the interviews undergone during the study (57% useful, 28% very useful, 15% useless) and successfully integrated the MEMS® into their daily lives (57% very easily, 43% easily), according to the results of the questionnaire at the end of the monitoring period.
Conclusion: Persistence and quality of execution observed in our capecitabine group were excellent and better than expected compared with previously published studies. The interdisciplinary approach allowed us to better identify and help patients with toxicities to maintain adherence. Overall, patients were satisfied with the global interdisciplinary follow-up. Longer follow-up will allow a better evaluation of our method and its impact. Interpretation of the results of patients in the other groups of this ongoing trial will provide information for a more detailed analysis.
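Persistence and quality of execution are derived from the MEMS® opening records. The abstract does not give the exact definitions, so the sketch below uses one plausible operationalization: persistence as the fraction of the monitored period before discontinuation, and execution as the fraction of prescribed dose-slots with a recorded opening (capped per day):

```python
def persistence(days_monitored, days_until_discontinuation):
    """Fraction of the monitoring period the patient remained on treatment."""
    return min(days_until_discontinuation, days_monitored) / days_monitored

def execution(openings_per_day, doses_per_day):
    """Fraction of prescribed doses with a recorded MEMS opening,
    capping each day's count at the prescribed number of doses."""
    taken = sum(min(n, doses_per_day) for n in openings_per_day)
    return taken / (doses_per_day * len(openings_per_day))

# Hypothetical patient: 100 monitored days, stopped on day 98,
# twice-daily capecitabine with a few missed doses:
p = persistence(100, 98)                      # 0.98
e = execution([2, 2, 1, 2, 2, 2, 0, 2], 2)    # 13/16
```

Capping daily openings at the prescribed count prevents curiosity openings or pocket dosing from inflating the execution estimate.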
Abstract:
BACKGROUND: Diffusion-weighted magnetic resonance imaging (MRI) is increasingly being used for assessing treatment success in oncology, but its real clinical value needs to be evaluated by comparison with other, already established, metabolic imaging techniques. PURPOSE: To prospectively evaluate the clinical potential of diffusion-weighted MRI with apparent diffusion coefficient (ADC) mapping for assessing gastrointestinal stromal tumor (GIST) response to targeted therapy, compared with 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT). MATERIAL AND METHODS: Eight patients (mean age, 56 ± 11 years) known to have metastatic GIST underwent 18F-FDG PET/CT and MRI (T1Gd, DWI [b = 50, 300, 600], ADC mapping) simultaneously, before and after a change in targeted therapy. MR and PET/CT examinations were first analyzed blindly. Second, PET/CT images were co-registered with T1Gd-MR images for lesion detection. Only 18F-FDG avid lesions were considered. Maximum standardized uptake value (SUVmax) and the corresponding minimum ADC (ADCmin) were measured for the six largest lesions per patient, if any, on baseline and follow-up examinations. The relationship between changes in SUVmax and ADCmin was analyzed (Spearman's correlation). RESULTS: Twenty-four metastases (12 hepatic, 12 extra-hepatic) were compared on PET/CT and MR images. SUVmax decreased from 7.7 ± 8.1 g/mL to 5.5 ± 5.4 g/mL (P = 0.20), while ADCmin increased from 1.2 ± 0.3 × 10(-3) mm(2)/s to 1.5 ± 0.3 × 10(-3) mm(2)/s (P = 0.0002). There was a significant association between changes in SUVmax and ADCmin (rho = -0.62, P = 0.0014), but not between changes in lesion size (P = 0.40). CONCLUSION: Changes in ADCmin correlated with the response of 18F-FDG avid GIST to targeted therapy. Thus, diffusion-weighted MRI may represent a radiation-free alternative for treatment follow-up in patients with metastatic GIST.
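The reported rho is a Spearman rank correlation between the per-lesion changes in SUVmax and ADCmin. A tie-free Python sketch using the classic formula rho = 1 - 6*Σd²/(n(n²-1)) on made-up paired changes (the example values are deliberately anti-monotone, so rho comes out as exactly -1; the study's real data gave -0.62):

```python
def spearman_rho(x, y):
    """Spearman rank correlation (assumes no tied values)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Made-up per-lesion changes: SUVmax falls as ADCmin rises (not study data):
d_suv = [-3.1, -0.5, -2.2, 0.4, -1.0]
d_adc = [0.4, 0.1, 0.3, -0.1, 0.2]
rho = spearman_rho(d_suv, d_adc)
```

A negative rho is exactly what the biology predicts: effective therapy lowers glucose uptake (SUVmax down) while cell death raises water diffusivity (ADCmin up).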
Abstract:
The activity of eosinophilic esophagitis (EoE) can be assessed with patient-reported outcomes and biologic measures. Patient-reported outcomes include symptoms and quality of life, whereas biologic measures refer to endoscopic, histologic, and biochemical activity (e.g. blood biomarkers). So far, a validated tool to assess EoE activity in the above-mentioned dimensions is lacking. Given the lack of a standardized way to assess EoE activity in the various dimensions, the results of different clinical trials may be difficult to compare. For symptom assessment in adult patients, the symptom 'dysphagia' should be evaluated according to different standardized food consistencies. Furthermore, symptom assessment should take into account the following items: avoidance of specific food categories, food modification, and time to eat a regular meal. A distinct symptom recall period (e.g. 2 weeks) has to be defined for symptom assessment. Performing an 'esophageal stress test' with ingestion of a standardized meal to measure symptom severity bears the potential risk of acute food bolus impaction and should therefore be avoided. The description of endoscopic findings in EoE has meanwhile been standardized. Histologic evaluation of EoE activity should report either the size of the high-power field used or count the eosinophils per mm(2). There is a current lack of blood biomarkers demonstrating a good correlation with histologic activity in esophageal biopsies. The development and validation of an adult and pediatric EoE activity index is urgently needed not only for clinical trials and observational studies, but also for daily practice.
Abstract:
BACKGROUND: Protein-energy malnutrition is highly prevalent in aged populations. Associated clinical, economic, and social burden is important. A valid screening method that would be robust and precise, but also easy, simple, and rapid to apply, is essential for adequate therapeutic management. OBJECTIVES: To compare the interobserver variability of 2 methods measuring food intake: semiquantitative visual estimations made by nurses versus calorie measurements performed by dieticians on the basis of standardized color digital photographs of servings before and after consumption. DESIGN: Observational monocentric pilot study. SETTING/PARTICIPANTS: A geriatric ward. The meals were randomly chosen from the meal tray. The choice was anonymous with respect to the patients who consumed them. MEASUREMENTS: The test method consisted of the estimation of calorie consumption by dieticians on the basis of standardized color digital photographs of servings before and after consumption. The reference method was based on direct visual estimations of the meals by nurses. Food intake was expressed in the form of a percentage of the serving consumed and calorie intake was then calculated by a dietician based on these percentages. The methods were applied with no previous training of the observers. Analysis of variance was performed to compare their interobserver variability. RESULTS: Of 15 meals consumed and initially examined, 6 were assessed with each method. Servings not consumed at all (0% consumption) or entirely consumed by the patient (100% consumption) were not included in the analysis so as to avoid systematic error. The digital photography method showed higher interobserver variability in calorie intake estimations. The difference between the compared methods was statistically significant (P < .03). CONCLUSIONS: Calorie intake measures for geriatric patients are more concordant when estimated in a semiquantitative way. 
Digital photography for food intake estimation without previous specific training of dieticians should not be considered as a reference method in geriatric settings, as it shows no advantages in terms of interobserver variability.
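The comparison of inter-observer variability can be sketched as the mean per-meal standard deviation of the observers' calorie estimates (the study itself used analysis of variance; the numbers below are made up purely to show the computation):

```python
import statistics

def mean_interobserver_sd(estimates_by_meal):
    """Mean over meals of the SD of the observers' calorie estimates."""
    return statistics.mean(statistics.stdev(obs) for obs in estimates_by_meal)

# Made-up kcal estimates, 3 observers per meal (not the study's data):
visual = [[300, 310, 305], [200, 205, 210]]   # nurses, direct estimation
photo = [[300, 350, 250], [200, 260, 140]]    # dieticians, photographs

sd_visual = mean_interobserver_sd(visual)   # 5.0
sd_photo = mean_interobserver_sd(photo)     # 55.0
```

A larger mean per-meal SD for the photograph-based estimates corresponds to the higher inter-observer variability the study reports for that method.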