148 results for test protocol
at Université de Lausanne, Switzerland
Abstract:
BACKGROUND: There is no recommendation to screen ferritin levels in blood donors, even though several studies have noted a high prevalence of iron deficiency after blood donation, particularly among menstruating women. Furthermore, some clinical trials have shown that non-anaemic women with unexplained fatigue may benefit from iron supplementation. Our objective is to determine the clinical effect of iron supplementation on fatigue in female blood donors without anaemia but with a serum ferritin ≤ 30 ng/ml. METHODS/DESIGN: In a double-blind randomised controlled trial, we will measure the blood count and ferritin level of women under 50 years of age, who donate blood to the University Hospital of Lausanne Blood Transfusion Department, at the time of donation and after 1 week. One hundred and forty donors with a ferritin level ≤ 30 ng/ml and a haemoglobin level ≥ 120 g/l (non-anaemic) one week after the donation will be included in the study and randomised. A one-month course of oral ferrous sulphate (80 mg/day of elemental iron) will be compared with placebo. Self-reported fatigue will be measured using a visual analogue scale. Secondary outcomes are: fatigue score (Fatigue Severity Scale), maximal aerobic power (Chester Step Test), quality of life (SF-12), and mood disorders (Prime-MD). Haemoglobin and ferritin concentrations will be monitored before and after the intervention. DISCUSSION: Iron deficiency is a potential problem for all blood donors, especially menstruating women. To our knowledge, no other intervention study has yet evaluated the impact of iron supplementation on subjective symptoms after a blood donation. TRIAL REGISTRATION: NCT00689793.
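A minimal sketch, in Python, of the eligibility and allocation logic this protocol describes; the thresholds come from the abstract, while the function names, field names, and the simple 1:1 allocation are illustrative assumptions:

```python
# Hypothetical sketch of the trial's eligibility screen. Thresholds
# (ferritin <= 30 ng/ml, haemoglobin >= 120 g/l, age < 50) come from
# the abstract; everything else is illustrative.
import random

def eligible(age_years: float, ferritin_ng_ml: float, haemoglobin_g_l: float) -> bool:
    """Check the week-after-donation inclusion criteria."""
    return age_years < 50 and ferritin_ng_ml <= 30 and haemoglobin_g_l >= 120

def allocate() -> str:
    """Simple 1:1 randomisation to iron (80 mg/day elemental) or placebo."""
    return random.choice(["ferrous sulphate", "placebo"])

if eligible(age_years=34, ferritin_ng_ml=22, haemoglobin_g_l=131):
    print("Randomised to:", allocate())
```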
Abstract:
Ventricular assist devices (VADs) are used in the treatment of terminal heart failure or as a bridge to transplantation. We created a biVAD using artificial muscles (AMs) that supports both ventricles at the same time, and we developed a test bench (TB) as an in vitro evaluation system enabling the measurement of its performance. The biVAD exerts different pressures on the left and right ventricles, as the heart does physiologically. A heart model based on a child's heart was constructed in silicone and fitted with the biVAD. Two water-filled pipettes, each topped with an ultrasonic sensor and attached to one ventricle, reproduced the preload and afterload of each ventricle through real-time measurement of the fluid height variation, which is proportional to the exerted pressure. LabVIEW software extrapolated the displaced volume and the pressure generated by each side of our biVAD. The development of a standardized protocol permitted the validation of the TB for in vitro evaluation, the measurement of the performance of the AM biVAD reported herein, and the reproducibility of the data.
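For intuition, the conversion described here rests on hydrostatics: a column-height change Δh maps to pressure p = ρgΔh and displaced volume V = AΔh for a pipette of cross-section A. A minimal sketch, where the pipette cross-section and sample height changes are assumptions rather than details from the paper:

```python
# Hydrostatic conversion of a measured fluid-height change to pressure
# and displaced volume, as a LabVIEW-style acquisition loop might do.
# The pipette cross-section and sample heights are illustrative.
RHO_WATER = 1000.0         # kg/m^3
G = 9.81                   # m/s^2
PIPETTE_AREA_M2 = 1.0e-4   # hypothetical 1 cm^2 cross-section

def pressure_pa(delta_h_m: float) -> float:
    """Hydrostatic pressure of a water column of height delta_h (Pa)."""
    return RHO_WATER * G * delta_h_m

def displaced_volume_ml(delta_h_m: float) -> float:
    """Volume displaced into the pipette (ml)."""
    return PIPETTE_AREA_M2 * delta_h_m * 1e6

for h in (0.01, 0.05, 0.10):  # example height changes in metres
    print(f"dh={h:.2f} m -> {pressure_pa(h):.0f} Pa, {displaced_volume_ml(h):.1f} ml")
```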
Abstract:
BACKGROUND AND STUDY AIMS: The current gold standard in Barrett's esophagus monitoring consists of four-quadrant biopsies every 1-2 cm in accordance with the Seattle protocol. Adding brush cytology processed by digital image cytometry (DICM) may further increase the detection of patients with Barrett's esophagus who are at risk of neoplasia. The aim of the present study was to assess the additional diagnostic value and accuracy of DICM when added to the standard histological analysis in a cross-sectional multicenter study of patients with Barrett's esophagus in Switzerland. METHODS: One hundred sixty-four patients with Barrett's esophagus underwent 239 endoscopies with biopsy and brush cytology. DICM was carried out on 239 cytology specimens. Measures of the test accuracy of DICM (relative risk, sensitivity, specificity, likelihood ratios) were obtained by dichotomizing the histopathology results (high-grade dysplasia or adenocarcinoma vs. all others) and the DICM results (aneuploidy/intermediate pattern vs. diploidy). RESULTS: DICM revealed diploidy in 83% of 239 endoscopies, an intermediate pattern in 8.8%, and aneuploidy in 8.4%. An intermediate DICM result carried a relative risk (RR) of 12, and aneuploidy an RR of 27, for high-grade dysplasia/adenocarcinoma. Adding DICM to the standard biopsy protocol, a pathological cytometry result (aneuploid or intermediate) was found in 25 of 239 endoscopies (11%; 18 patients) with low-risk histology (no high-grade dysplasia or adenocarcinoma). During follow-up of 14 of these 18 patients, histological deterioration was seen in 3 (21%). CONCLUSION: DICM from brush cytology may add important information to a standard biopsy protocol by identifying a subgroup of Barrett's esophagus patients with high-risk cellular abnormalities.
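The accuracy measures named here all derive from a 2x2 table of the dichotomized DICM result against the dichotomized histology. A generic sketch of those calculations; the counts are placeholders, not the study's data:

```python
# Generic 2x2 diagnostic-accuracy calculations for a dichotomized test
# (aneuploid/intermediate vs. diploid) against a dichotomized reference
# (HGD/adenocarcinoma vs. all others). The counts below are placeholders.
def accuracy_measures(tp: int, fp: int, fn: int, tn: int) -> dict:
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "lr_positive": sens / (1 - spec),   # positive likelihood ratio
        "lr_negative": (1 - sens) / spec,   # negative likelihood ratio
        # risk of disease if test positive vs. risk if test negative
        "relative_risk": (tp / (tp + fp)) / (fn / (fn + tn)),
    }

print(accuracy_measures(tp=12, fp=25, fn=3, tn=199))  # placeholder counts
```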
Abstract:
BACKGROUND: Newborn screening (NBS) for Cystic Fibrosis (CF) has been introduced in many countries, but there is no ideal protocol suitable for all countries. This retrospective study was conducted to evaluate whether the planned two-step CF NBS with immunoreactive trypsinogen (IRT) and 7 CFTR mutations would have detected all clinically diagnosed children with CF in Switzerland. METHODS: IRT was measured using the AutoDELFIA Neonatal IRT kit in stored NBS cards. RESULTS: Between 2006 and 2009, 66 children with CF were reported, 4 of whom were excluded for various reasons (born in another country, NBS at 6 months, no informed consent). 98% (61/62) had significantly higher IRT compared with a matched control group. There was one false-negative IRT result in an asymptomatic child with atypical CF (normal pancreatic function and sweat test). CONCLUSIONS: All children but one with atypical CF would have been detected with the planned two-step protocol.
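A hypothetical sketch of the two-step decision logic described here, with IRT first and the 7-mutation CFTR panel second; the IRT cutoff and all values are assumptions for illustration only, not the Swiss programme's parameters:

```python
# Hypothetical two-step IRT + CFTR-panel newborn-screening decision.
# The cutoff and return messages are illustrative assumptions.
def screen(irt_ng_ml: float, irt_cutoff_ng_ml: float, n_panel_mutations: int) -> str:
    if irt_ng_ml < irt_cutoff_ng_ml:
        return "screen negative"                      # step 1: IRT
    if n_panel_mutations == 0:
        return "IRT elevated, no panel mutation: per-protocol follow-up"
    return "screen positive: refer for sweat test"    # step 2: CFTR panel

print(screen(irt_ng_ml=85.0, irt_cutoff_ng_ml=60.0, n_panel_mutations=1))
```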
Abstract:
PURPOSE: The current study tested the applicability of Jessor's problem behavior theory (PBT) in national probability samples from Georgia and Switzerland. Comparisons focused on (1) the applicability of the problem behavior syndrome (PBS) in both developmental contexts, and (2) the applicability of a set of theory-driven risk and protective factors in the prediction of problem behaviors. METHODS: School-based questionnaire data were collected from n = 18,239 adolescents in Georgia (n = 9499) and Switzerland (n = 8740) following the same protocol. Participants rated five measures of problem behaviors (alcohol and drug use, problems because of alcohol and drug use, and deviance), three risk factors (future uncertainty, depression, and stress), and three protective factors (family, peer, and school attachment). Final study samples included n = 9043 Georgian youth (mean age = 15.57; 58.8% female) and n = 8348 Swiss youth (mean age = 17.95; 48.5% female). Data analyses were completed using structural equation modeling, path analyses, and post hoc z-tests for comparisons of regression coefficients. RESULTS: Findings indicated that the PBS replicated in both samples, and that the theory-driven risk and protective factors accounted for 13% and 10% of the variance in the PBS in the Georgian and Swiss samples, respectively, net of the effects of demographic variables. Follow-up z-tests provided evidence of some differences in the magnitude, but not the direction, of five of the six individual paths by country. CONCLUSION: PBT and the PBS find empirical support in these Eurasian and Western European samples; thus, Jessor's theory holds value and promise in understanding the etiology of adolescent problem behaviors outside of the United States.
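The post hoc comparison of regression coefficients across two independent samples is commonly done with the statistic z = (b1 − b2) / √(SE1² + SE2²). A minimal sketch; the coefficients and standard errors below are placeholders, not the study's estimates:

```python
# z-test for the difference between two independent regression
# coefficients (e.g., a Georgian vs. a Swiss path estimate).
from math import sqrt
from scipy.stats import norm

def coef_z_test(b1: float, se1: float, b2: float, se2: float):
    z = (b1 - b2) / sqrt(se1**2 + se2**2)
    p = 2 * (1 - norm.cdf(abs(z)))   # two-sided p-value
    return z, p

z, p = coef_z_test(b1=0.32, se1=0.04, b2=0.21, se2=0.05)  # placeholder values
print(f"z = {z:.2f}, p = {p:.4f}")
```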
Abstract:
Introduction: One of the main goals of exercise testing in children is the evaluation of exercise capacity. There are many testing protocols, but the Bruce treadmill protocol is widely used among pediatric cardiology centers. Thirty years ago, Cumming et al. were the first to establish normal values for children from North America (Canada) aged 4 to 18 years. No data have ever been published for children from Western Europe. Our study aimed to assess the validity of the normal values from Cumming et al. for children from Western Europe in the 21st century. Methods: This is a retrospective cohort study in a tertiary care children's hospital. 144 children referred to our institution but finally diagnosed as having a normal heart underwent exercise stress testing using the Bruce protocol between 1999 and 2006. Data from 59 girls and 85 boys aged 6 to 18 were reviewed. The mean endurance time (ET) for each age category and gender was compared with the mean normal values from Cumming et al. by an unpaired t-test. Results: Mean ET increases with age until 15 years old in girls and then decreases. Mean ET increases continuously from 6 to 18 years old in boys, and the increase is more pronounced in boys than in girls. In our study, a significantly higher mean ET was found for boys in the age categories 10 to 12, 13 to 15, and 16 to 18. No significant difference was found in any other group. Conclusions: Some of the normal values established by Cumming et al. in 1978 for ET with the Bruce protocol are probably no longer appropriate today for children from Western Europe. Our study showed that mean ET is higher for boys from 10 to 18 years old. Despite common beliefs, cardiovascular conditioning does not yet seem to be reduced in children from Western Europe. New data for Bruce treadmill exercise testing in healthy children aged 4 to 18 years living in Western Europe are required.
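The comparison described here is a standard unpaired (two-sample) t-test of mean endurance times against published reference values; a minimal sketch using summary statistics, where all numbers are placeholders:

```python
# Unpaired t-test of mean endurance time (ET) against published
# reference values, from summary statistics. Numbers are placeholders.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(
    mean1=12.4, std1=1.8, nobs1=25,   # hypothetical study group (minutes)
    mean2=11.1, std2=1.6, nobs2=40,   # hypothetical reference group
)
print(f"t = {t:.2f}, p = {p:.4f}")
```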
Abstract:
Background: Cardiac magnetic resonance (CMR) is accepted as a method to assess suspected coronary artery disease (CAD). Nonetheless, invasive coronary angiography (CXA), combined or not with fractional flow reserve (FFR), remains the main diagnostic test to evaluate CAD. Little data exist on the economic impact of the use of these procedures in a population with a low to intermediate pre-test probability. Objective: To compare the costs of 3 decision strategies to revascularize a patient with suspected CAD: (1) a strategy guided by CMR; (2) a hypothetical strategy guided by CXA-FFR; and (3) a hypothetical strategy guided by CXA alone.
Abstract:
Buchheit, M, Al Haddad, H, Millet, GP, Lepretre, PM, Newton, M, and Ahmaidi, S. Cardiorespiratory and cardiac autonomic responses to the 30-15 Intermittent Fitness Test in team sport players. J Strength Cond Res 23(1): xxx-xxx, 2009. The 30-15 Intermittent Fitness Test (30-15IFT) is an attractive alternative to classic continuous incremental field tests for defining a reference velocity for interval training prescription in team sport athletes. The aim of the present study was to compare cardiorespiratory and autonomic responses to the 30-15IFT with those observed during a standard continuous test (CT). In 20 team sport players (20.9 +/- 2.2 years), cardiopulmonary parameters were measured during exercise and for 10 minutes after both tests. Final running velocity, peak lactate ([La]peak), and rating of perceived exertion (RPE) were also measured. Parasympathetic function was assessed during the postexercise recovery phase via the heart rate (HR) recovery time constant (HRRtau) and HR variability (HRV) vagal-related indices. At exhaustion, no difference was observed in peak oxygen uptake (V̇O2peak), respiratory exchange ratio, HR, or RPE between the 30-15IFT and the CT. In contrast, the 30-15IFT led to significantly higher minute ventilation, [La]peak, and final velocity than the CT (p < 0.05 for all parameters). All maximal cardiorespiratory variables observed during both tests were moderately to well correlated (e.g., r = 0.76, p = 0.001 for V̇O2peak). Regarding ventilatory thresholds (VThs), all cardiorespiratory measurements were similar and well correlated between the 2 tests. Parasympathetic function was lower after the 30-15IFT than after the CT, as indicated by a significantly longer HRRtau (81.9 +/- 18.2 vs. 60.5 +/- 19.5 for the 30-15IFT and CT, respectively, p < 0.001) and lower HRV vagal-related indices (i.e., the root mean square of successive R-R interval differences [rMSSD]: 4.1 +/- 2.4 vs. 7.0 +/- 4.9 milliseconds, p < 0.05). In conclusion, the 30-15IFT is accurate for assessing VThs and V̇O2peak, but it alters postexercise parasympathetic function more than a continuous incremental protocol.
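rMSSD, the vagal index reported here, is the root mean square of successive differences between consecutive R-R intervals. A minimal sketch over a made-up R-R series:

```python
# Root mean square of successive R-R interval differences (rMSSD),
# the vagal-related HRV index used in the abstract. The R-R series
# below (in milliseconds) is a made-up illustration.
import numpy as np

def rmssd(rr_ms: np.ndarray) -> float:
    diffs = np.diff(rr_ms)                 # successive R-R differences
    return float(np.sqrt(np.mean(diffs ** 2)))

rr = np.array([812, 805, 798, 810, 822, 815, 809], dtype=float)
print(f"rMSSD = {rmssd(rr):.1f} ms")
```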
Abstract:
BACKGROUND: The diagnosis of Pulmonary Embolism (PE) in the emergency department (ED) is crucial. As emergency physicians fear missing this potentially life-threatening condition, PE tends to be over-investigated, exposing patients to unnecessary risks and uncertain benefit in terms of outcome. The Pulmonary Embolism Rule-out Criteria (PERC) is an eight-item block of clinical criteria that can identify patients who can safely be discharged from the ED without further investigation for PE. The endorsement of this rule could markedly reduce the number of irradiative imaging studies, ED length of stay, and the rate of adverse events resulting from both diagnostic and therapeutic interventions. Several retrospective and prospective studies have shown the safety and benefits of the PERC rule for PE diagnosis in low-risk patients, but the validity of this rule is still controversial. We hypothesize that in European patients with a low gestalt clinical probability and who are PERC-negative, PE can be safely ruled out and the patient discharged without further testing. METHODS/DESIGN: This is a controlled, cluster-randomized trial in 15 centers in France. Each center will be randomized for the sequence of intervention periods: a 6-month intervention period (PERC-based strategy) followed by a 6-month control period (usual care), or in reverse order, with 2 months of "wash-out" between the 2 periods. Adult patients presenting to the ED with a suspicion of PE and a low pre-test probability estimated by clinical gestalt will be eligible. The primary outcome is the percentage of failure of the diagnostic strategy, defined as diagnosed venous thromboembolic events at 3-month follow-up among patients for whom PE was initially ruled out. DISCUSSION: The PERC rule has the potential to decrease the number of irradiative imaging studies in the ED, and is reported to be safe. However, no randomized study has ever validated the safety of PERC. Furthermore, some studies have challenged the safety of a PERC-based strategy to rule out PE, especially in Europe where the prevalence of PE diagnosed in the ED is high. The PROPER study should provide high-quality evidence to settle this issue. If it confirms the safety of the PERC rule, physicians will be able to reduce the number of investigations, associated subsequent adverse events, costs, and ED length of stay for patients with a low clinical probability of PE. TRIAL REGISTRATION: NCT02375919.
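For reference, the eight PERC items are commonly given as: age < 50 years, heart rate < 100/min, oxygen saturation ≥ 95%, no hemoptysis, no exogenous estrogen use, no prior venous thromboembolism, no recent surgery or trauma requiring hospitalization, and no unilateral leg swelling. A sketch of a PERC-negative check; the field names are our own, not the trial's case-report form:

```python
# Illustrative PERC-negative check following the commonly cited
# eight-item rule; field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Patient:
    age_years: int
    heart_rate_bpm: int
    sao2_percent: float
    hemoptysis: bool
    estrogen_use: bool
    prior_vte: bool
    recent_surgery_or_trauma: bool
    unilateral_leg_swelling: bool

def perc_negative(p: Patient) -> bool:
    """True if all eight PERC criteria are met (rule-out candidate
    in a low-pretest-probability patient)."""
    return (p.age_years < 50 and p.heart_rate_bpm < 100
            and p.sao2_percent >= 95
            and not (p.hemoptysis or p.estrogen_use or p.prior_vte
                     or p.recent_surgery_or_trauma
                     or p.unilateral_leg_swelling))

print(perc_negative(Patient(34, 82, 98.0, False, False, False, False, False)))
```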
Abstract:
BACKGROUND: Enhanced recovery after surgery (ERAS) programmes have been shown to decrease complications and hospital stay. The cost-effectiveness of such programmes has been demonstrated for colorectal surgery. This study aimed to assess the economic outcomes of a standard ERAS programme for pancreaticoduodenectomy. METHODS: ERAS for pancreaticoduodenectomy was implemented in October 2012. All consecutive patients who underwent pancreaticoduodenectomy until October 2014 were recorded. This group was compared in terms of costs with a cohort of consecutive patients who underwent pancreaticoduodenectomy between January 2010 and October 2012, before ERAS implementation. Preoperative, intraoperative and postoperative real costs were collected for each patient via the hospital administration. A bootstrap independent t test was used for comparison. ERAS-specific costs were integrated into the model. RESULTS: The groups were well matched in terms of demographic and surgical details. The overall complication rate was 68 per cent (50 of 74 patients) in the ERAS group and 82 per cent (71 of 87 patients) in the pre-ERAS group (P = 0.046). Median hospital stay was lower in the ERAS group (15 versus 19 days; P = 0.029). ERAS-specific costs were €922 per patient. Mean total costs were €56,083 per patient in the ERAS group and €63,821 per patient in the pre-ERAS group (P = 0.273). Mean intensive care unit (ICU) and intermediate care costs were €9,139 and €13,793 per patient for the ERAS and pre-ERAS groups respectively (P = 0.151). CONCLUSION: ERAS implementation for pancreaticoduodenectomy did not increase costs in this cohort. Savings were noted in anaesthesia/operating room, medication and laboratory costs. Fewer patients in the ERAS group required an ICU stay.
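The "bootstrap independent t test" used for the cost comparison can be sketched as resampling each group and examining the distribution of the difference in means, a common approach for skewed cost data; this is one generic variant, not necessarily the study's exact procedure, and the cost vectors are placeholders:

```python
# Bootstrap comparison of mean costs between two independent groups.
# Cost vectors are synthetic placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_mean_diff_ci(a, b, n_boot=10_000, alpha=0.05):
    a, b = np.asarray(a, float), np.asarray(b, float)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        diffs[i] = rng.choice(a, a.size).mean() - rng.choice(b, b.size).mean()
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi   # a CI excluding 0 suggests a significant difference

eras = rng.normal(56_000, 18_000, 74)      # hypothetical ERAS costs (EUR)
pre_eras = rng.normal(64_000, 22_000, 87)  # hypothetical pre-ERAS costs
print(bootstrap_mean_diff_ci(eras, pre_eras))
```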
Abstract:
The objective of this work was to develop an easily applicable technique and a standardized protocol for high-quality post-mortem angiography. This protocol should (1) improve radiological interpretation by decreasing perfusion-related artifacts and achieving complete filling of the vascular system, and (2) ease and standardize the execution of the examination. To this aim, 45 human corpses were investigated by post-mortem computed tomography (CT) angiography using different perfusion protocols, a modified heart-lung machine, and a new contrast agent mixture specifically developed for post-mortem investigations. The quality of the CT angiographies was evaluated radiologically by observing the filling of the vascular system, assessing the interpretability of the resulting images, and comparing the radiological diagnoses with conventional autopsy conclusions. Post-mortem angiography yielded satisfactory results provided that the volumes of the injected contrast agent mixture were high enough to fill the vascular system completely. In order to avoid artifacts due to the post-mortem perfusion, a minimum of three angiographic phases and one native scan had to be performed. These findings were taken into account to develop a protocol for high-quality post-mortem CT angiography that minimizes the risk of radiological misinterpretation. The proposed protocol is easily applicable in a standardized way and yields high-quality, radiologically interpretable visualization of the vascular system in post-mortem investigations.
Abstract:
Purpose: To investigate the accuracy of 4 clinical instruments in the detection of glaucomatous damage. Methods: 102 eyes of 55 test subjects (mean age = 66.5 years; range 39 to 89) underwent Heidelberg Retinal Tomography (HRT III) (disc area < 2.43) and standard automated perimetry (SAP) using Octopus (Dynamic), Pulsar (TOP), and the Moorfields Motion Displacement Test (MDT) (ESTA strategy). Eyes were separated into three groups: (1) Healthy (H): IOP < 21 mmHg and healthy discs on clinical examination; 39 subjects, 78 eyes. (2) Glaucoma suspect (GS): suspicious discs on clinical examination; 12 subjects, 15 eyes. (3) Glaucoma (G): progressive structural or functional loss; 14 subjects, 20 eyes. Clinical diagnostic precision was examined using the cut-off associated with the p < 5% normative limit of the MD (Octopus/Pulsar), PTD (MDT) and MRA (HRT) analyses. Sensitivity, specificity and accuracy were calculated for each instrument. Results: See table. Conclusions: Despite the advantage of defining glaucoma suspects using clinical optic disc examination, the HRT did not yield significantly higher accuracy than the functional measures. HRT, MDT and Octopus SAP yielded higher accuracy than Pulsar perimetry, although the results did not reach statistical significance. Further studies are required to investigate the structure-function correlations between these instruments.
Abstract:
Unraveling the effect of selection vs. drift on the evolution of quantitative traits is commonly achieved by one of two methods. Either one contrasts population differentiation estimates for genetic markers and quantitative traits (the Qst-Fst contrast), or multivariate methods are used to study the covariance between sets of traits. In particular, many studies have focused on the genetic variance-covariance matrix (the G matrix). However, both drift and selection can cause changes in G. To understand their joint effects, we recently combined the two methods into a single test (accompanying article by Martin et al.), which we apply here to a network of 16 natural populations of the freshwater snail Galba truncatula. Using this new neutrality test, extended to hierarchical population structures, we studied the multivariate equivalent of the Qst-Fst contrast for several life-history traits of G. truncatula. We found strong evidence of selection acting on multivariate phenotypes. Selection was homogeneous among populations within each habitat and heterogeneous between habitats. We found that the G matrices were relatively stable within each habitat, with proportionality between the among-populations (D) and the within-populations (G) covariance matrices. The effect of habitat heterogeneity is to break this proportionality because of selection for habitat-dependent optima. Individual-based simulations mimicking our empirical system confirmed that these patterns are expected under the selective regime inferred. We show that homogenizing selection can mimic some effect of drift on the G matrix (G and D almost proportional), but that incorporating information from molecular markers (multivariate Qst-Fst) allows disentangling the two effects.
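For context, the univariate Qst-Fst contrast compares quantitative-trait differentiation, Qst = σ²B / (σ²B + 2σ²W), where σ²B and σ²W are the between- and within-population additive genetic variances, with the neutral-marker Fst. A minimal sketch with placeholder variance components:

```python
# Qst from between-population (var_between) and within-population
# (var_within) additive genetic variances; values are placeholders.
def qst(var_between: float, var_within: float) -> float:
    return var_between / (var_between + 2 * var_within)

fst_neutral = 0.12                          # hypothetical marker-based Fst
q = qst(var_between=0.8, var_within=1.5)
print(f"Qst = {q:.2f} vs Fst = {fst_neutral:.2f}")
# Qst > Fst suggests divergent selection; Qst < Fst, homogenizing selection.
```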
Abstract:
Over-resuscitation is deleterious in many critically ill conditions, including major burns. For more than 15 years, several strategies to reduce fluid administration in burns during the initial resuscitation phase have been proposed, but no single or simple parameter has shown superiority. Fluid administration guided by invasive hemodynamic parameters has usually resulted in over-resuscitation. As reported in the previous issue of Critical Care, Sánchez-Sánchez and colleagues analyzed the performance of a 'permissive hypovolemia' protocol guided by invasive hemodynamic parameters (PiCCO, Pulsion Medical Systems, Munich, Germany) and vital signs in a prospective cohort over a 3-year period. The authors' results confirm that resuscitation can be achieved with below-normal levels of preload, but at the price of fluid administration greater than predicted by the Parkland formula (2 to 4 mL/kg per % burn). The classic approach based on an adapted Parkland equation may still be the simplest until further studies identify the optimal bundle of resuscitation goals.
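As a worked example of the Parkland-type estimate cited above (mL/kg per % total body surface area burned over the first 24 hours, with half classically given in the first 8 hours):

```python
# Parkland-type 24-hour fluid estimate: coefficient (mL/kg per % TBSA
# burned) x body weight x burn size. The 2-4 mL/kg range is from the
# text; the patient values below are illustrative.
def parkland_24h_ml(weight_kg: float, tbsa_burn_pct: float,
                    ml_per_kg_per_pct: float = 4.0) -> float:
    return ml_per_kg_per_pct * weight_kg * tbsa_burn_pct

total = parkland_24h_ml(weight_kg=70, tbsa_burn_pct=40)   # 11,200 mL
print(f"24-h estimate: {total:.0f} mL; first 8 h: {total / 2:.0f} mL")
```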