36 results for Rapid assessment methods
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Monitoring pathology/regeneration in experimental models of de-/remyelination requires an accurate measure not only of functional changes but also of the amount of myelin. We tested whether X-ray diffraction (XRD), which measures periodicity in unfixed myelin, can assess the structural integrity of myelin in fixed tissue. From laboratories involved in spinal cord injury research and in studying the aging primate brain, we solicited "blind" samples and used an electronic detector to rapidly record their diffraction patterns (30 min per pattern). We assessed myelin integrity by measuring its periodicity and relative amount. Fixation of the tissue itself introduced +/-10% variation in periodicity and +/-40% variation in the relative amount of myelin. For samples having the most native-like periods, the relative amounts of myelin detected allowed distinctions to be made between normal and demyelinating segments, between motor and sensory tracts within the spinal cord, and between aged and young primate CNS. Different periodicities also allowed distinctions to be made between samples from spinal cord and nerve roots and between well-fixed and poorly fixed samples. Our findings suggest that, in addition to evaluating the effectiveness of different fixatives, XRD could be used as a robust and rapid technique for quantifying the relative amount of myelin in spinal cords and other CNS tissue samples from experimental models of de- and remyelination.
Abstract:
Purpose The accuracy, efficiency, and efficacy of four commonly recommended medication safety assessment methodologies were systematically reviewed. Methods Medical literature databases were systematically searched for any comparative study conducted between January 2000 and October 2009 in which at least two of the four methodologies—incident report review, direct observation, chart review, and trigger tool—were compared with one another. Any study that compared two or more methodologies for quantitative accuracy (adequacy of the assessment of medication errors and adverse drug events), efficiency (effort and cost), and efficacy, and that provided numerical data, was included in the analysis. Results Twenty-eight studies were included in this review. Of these, 22 compared two of the methodologies, and 6 compared three. Direct observation identified the greatest number of reports of drug-related problems (DRPs), while incident report review identified the fewest. However, incident report review generally showed a higher specificity compared to the other methods and most effectively captured severe DRPs. In contrast, the sensitivity of incident report review was lower when compared with trigger tool. While trigger tool was the least labor-intensive of the four methodologies, incident report review appeared to be the least expensive, but only when linked with concomitant automated reporting systems and targeted follow-up. Conclusion All four medication safety assessment techniques—incident report review, chart review, direct observation, and trigger tool—have different strengths and weaknesses. Overlap between different methods in identifying DRPs is minimal. While trigger tool appeared to be the most effective and labor-efficient method, incident report review best identified high-severity DRPs.
Abstract:
Rapid bedside determination of cerebral blood pressure autoregulation (AR) may improve clinical utility. We tested the hypothesis that cerebral Hb oxygenation (HbDiff) and cerebral Hb volume (HbTotal) measured by near-infrared spectroscopy (NIRS) would correlate with cerebral blood flow (CBF) after single dose phenylephrine (PE). Critically ill patients requiring artificial ventilation and arterial lines were eligible. During rapid blood pressure rise induced by i.v. PE bolus, ΔHbDiff and ΔHbTotal were calculated by subtracting values at baseline (normotension) from values at peak blood pressure elevation (hypertension). With the aid of NIRS and bolus injection of indocyanine green, relative measures of CBF, called blood flow index (BFI), were determined during normotension and during hypertension. BFI during hypertension was expressed as percentage from BFI during normotension (BFI%). Autoregulation indices (ARIs) were calculated by dividing BFI%, ΔHbDiff, and ΔHbTotal by the concomitant change in blood pressure. In 24 patients (11 newborns and 13 children), significant correlations between BFI% and ΔHbDiff (or ΔHbTotal) were found. In addition, the associations between Hb-based ARI and BFI%-based ARI were significant with correlation coefficients of 0.73 (or 0.72). Rapid determination of dynamic AR with the aid of cerebral Hb signals and PE bolus seems to be reliable.
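The index calculations described above are simple ratios; the following is a minimal sketch, in which the function names, the percentage convention, and the example values are illustrative assumptions rather than details taken from the study protocol:

```python
def blood_flow_index_percent(bfi_hypertension, bfi_normotension):
    """Express BFI at peak blood pressure elevation as a percentage
    of the BFI measured during normotension (BFI%)."""
    return 100.0 * bfi_hypertension / bfi_normotension

def autoregulation_index(signal_change, map_change_mmhg):
    """Divide a cerebral signal change (BFI%, dHbDiff, or dHbTotal)
    by the concomitant change in mean arterial pressure."""
    return signal_change / map_change_mmhg

# Hypothetical example: BFI rises from 1.0 to 1.2 while mean arterial
# pressure rises by 20 mmHg after the phenylephrine bolus.
bfi_pct = blood_flow_index_percent(1.2, 1.0)            # 120.0 %
ari = autoregulation_index(bfi_pct - 100.0, 20.0)       # change per mmHg
```

With intact autoregulation one would expect the flow-based index to stay close to zero despite the pressure rise; the Hb-based indices are computed the same way, which is why the study can correlate them directly.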
Abstract:
Introduction. In this era of high-tech medicine, it is becoming increasingly important to assess patient satisfaction. There are several methods to do so, but these differ greatly in terms of cost, time, labour, and external validity. The aim of this study is to describe and compare the structure and implementation of different methods to assess the satisfaction of patients in an emergency department. Methods. The structure and implementation of the different methods to assess patient satisfaction were evaluated on the basis of a 90-minute standardised interview. Results. We identified a total of six different methods in six different hospitals. The average number of patients assessed was 5012, with a range from 230 (M5) to 20 000 patients (M2). In four methods (M1, M3, M5, and M6), the questionnaire was composed by a specialised external institute. In two methods, the questionnaire was created by the hospital itself (M2, M4). The median response rate was 58.4% (range 9-97.8%). With a reminder, the response rate increased by 60% (M3). Conclusion. The ideal method to assess patient satisfaction in the emergency department setting is a patient-based assessment conducted within the emergency department, planned and guided by expert personnel.
Abstract:
OBJECTIVES To evaluate the diagnostic performance of seven non-invasive tests (NITs) of liver fibrosis and to assess fibrosis progression over time in HIV/HCV co-infected patients. METHODS Transient elastography (TE) and six blood tests were compared to histopathological fibrosis stage (METAVIR). Participants were followed over three years with NITs at yearly intervals. RESULTS Area under the receiver operating characteristic curve (AUROC) for significant fibrosis (> = F2) in 105 participants was highest for TE (0.85), followed by FIB-4 (0.77), ELF-Test (0.77), APRI (0.76), Fibrotest (0.75), hyaluronic acid (0.70), and Hepascore (0.68). AUROC for cirrhosis (F4) was 0.97 for TE, followed by FIB-4 (0.91), APRI (0.89), Fibrotest (0.84), Hepascore (0.82), ELF-Test (0.82), and hyaluronic acid (0.79). A three-year follow-up was completed by 87 participants, all on antiretroviral therapy, and by 20 patients who completed HCV treatment (9 with sustained virologic response). TE, APRI, and Fibrotest did not significantly change during follow-up. There was weak evidence for an increase of FIB-4 (mean increase: 0.22, p = 0.07). Forty-two participants had a second liver biopsy. Among 38 participants with F0-F3 at baseline, 10 were progressors (1-stage increase in fibrosis, 8 participants; 2-stage, 1; 3-stage, 1). Among progressors, the mean increase in TE was 3.35 kPa, in APRI 0.36, and in FIB-4 0.75. Fibrotest results did not change over 3 years. CONCLUSION TE was the best NIT for liver fibrosis staging in HIV/HCV co-infected patients. APRI-Score, FIB-4 Index, Fibrotest, and ELF-Test were less reliable. The routinely available APRI and FIB-4 performed as well as more expensive tests. NITs did not change significantly during a follow-up of three years, suggesting slow liver disease progression in a majority of HIV/HCV co-infected persons on antiretroviral therapy.
Variability of anti-PF4/heparin antibody results obtained by the rapid testing system ID-H/PF4-PaGIA
Abstract:
BACKGROUND: Recent studies have shown that a low clinical pretest probability may be adequate for excluding heparin-induced thrombocytopenia. However, for patients with intermediate or high pretest probability, laboratory testing is essential for confirming or refuting the diagnosis. Rapid assessment of anti-PF4/heparin antibodies may assist clinical decision-making. OBJECTIVES: To evaluate the performance of the rapid ID-H/PF4-PaGIA. In particular, we verified reproducibility of results between plasma and serum specimens, between fresh and frozen samples, and between different ID-H/PF4-polymer lots (polystyrene beads coated with heparin/PF4 complexes). PATIENTS/METHODS: The samples studied were 1376 plasma and 914 corresponding serum samples from patients investigated for suspected heparin-induced thrombocytopenia between January 2000 and October 2008. Anti-PF4/heparin antibodies were assessed by ID-H/PF4-PaGIA, commercially available ELISAs, and the heparin-induced platelet aggregation test. RESULTS: Among 914 paired plasma/serum samples we noted discordant results (negative vs. low-titre positive) in nine instances (1%; 95% CI, 0.4-1.6%). Overall, agreement between titres assessed in plasma vs. serum was highly significant (Spearman correlation coefficient, 0.975; P < 0.0001). Forty-seven samples tested both fresh and after freezing/thawing showed good agreement, with one discordant positive/negative result (Spearman correlation coefficient, 0.970; P < 0.0001). Among 1376 plasma samples we noted a strikingly variable incidence of false negative results (ranging from none to 82%; 95% CI, 66-98%), depending on the ID-H/PF4-polymer lot employed. Faulty lots can be recognized by titrating commercial positive controls and stored samples from HIT patients. CONCLUSION: Laboratories performing the assay should implement stringent internal quality controls in order to recognize potentially faulty ID-H/PF4-polymer lots, thus avoiding false negative results.
Abstract:
PURPOSE: Computer-based feedback systems for assessing the quality of cardiopulmonary resuscitation (CPR) are widely used these days. Recordings usually involve compression- and ventilation-dependent variables. Chest compression depth, sufficient decompression, and correct hand position are displayed but interpreted independently of one another. We aimed to generate a parameter that represents all the relevant compression parameters combined, to provide a rapid assessment of the quality of chest compression: the effective compression ratio (ECR). METHODS: The following parameters were used to determine the ECR: compression depth, correct hand position, correct decompression, and the proportion of time used for chest compressions compared to the total time spent on CPR. Based on the ERC guidelines, we calculated that guideline-compliant CPR (30:2) has a minimum ECR of 0.79. To calculate the ECR, we expanded the previously described software solution. In order to demonstrate the usefulness of the new ECR parameter, we first performed a PubMed search for studies that reported correct compression and no-flow time, after which we calculated the new parameter, the ECR. RESULTS: The PubMed search revealed 9 trials. Calculated ECR values ranged between 0.03 (for a basic life support [BLS] study with two helpers and no feedback) and 0.67 (BLS with feedback from the 6th minute). CONCLUSION: ECR enables rapid, meaningful assessment of CPR and simplifies the comparability of studies as well as the individual performance of trainees. The structure of the software solution allows it to be easily adapted to any manikin, CPR feedback devices, and different resuscitation guidelines (e.g. ILCOR, ERC).
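The combination of parameters can be sketched as follows. The abstract does not give the authors' exact formula, so this sketch assumes a plausible definition: the fraction of compressions that are simultaneously correct in depth, hand position, and decompression, multiplied by the hands-on time fraction. Function and variable names are illustrative.

```python
def effective_compression_ratio(compressions, hands_on_time_s, total_cpr_time_s):
    """Hypothetical ECR: fraction of fully correct compressions scaled by
    the proportion of CPR time actually spent compressing the chest.

    `compressions` is a list of tuples
    (depth_ok, hand_position_ok, decompression_ok), one per compression.
    """
    if not compressions or total_cpr_time_s <= 0:
        return 0.0
    correct = sum(1 for depth, hand, release in compressions
                  if depth and hand and release)
    correct_fraction = correct / len(compressions)
    flow_fraction = hands_on_time_s / total_cpr_time_s  # 1 - no-flow fraction
    return correct_fraction * flow_fraction

# One 30:2 cycle, all compressions correct, 24 s hands-on in a 30 s cycle
cycle = [(True, True, True)] * 30
ecr = effective_compression_ratio(cycle, 24.0, 30.0)  # 0.8
```

A single composite number like this is what makes the reported cross-study comparisons (0.03 vs. 0.67) possible, since it penalizes both poor compression technique and long no-flow intervals.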
Abstract:
AIMS: Myocardial blood flow (MBF) is the gold standard to assess myocardial blood supply and, as recently shown, can be obtained by myocardial contrast echocardiography (MCE). The aims of this human study are (i) to test whether measurements of collateral-derived MBF by MCE are feasible during elective angioplasty and (ii) to validate the concept of pressure-derived collateral-flow assessment. METHODS AND RESULTS: Thirty patients with stable coronary artery disease underwent MCE of the collateral-receiving territory during and after angioplasty of 37 stenoses. MCE perfusion analysis was successful in 32 cases. MBF during and after angioplasty varied between 0.060-0.876 mL min(-1) g(-1) (0.304+/-0.196 mL min(-1) g(-1)) and 0.676-1.773 mL min(-1) g(-1) (1.207+/-0.327 mL min(-1) g(-1)), respectively. The collateral perfusion index (CPI), defined as the ratio of MBF during to MBF after angioplasty, varied between 0.05 and 0.67 (0.26+/-0.15). During angioplasty, simultaneous measurements of mean aortic pressure, coronary wedge pressure, and central venous pressure determined the pressure-derived collateral-flow index (CFI(p)), which varied between 0.04 and 0.61 (0.23+/-0.14). Linear-regression analysis demonstrated an excellent agreement between CFI(p) and CPI (y=0.88 x +0.01; r(2)=0.92; P<0.0001). CONCLUSION: Collateral-derived MBF measurements by MCE during angioplasty are feasible and proved that the pressure-derived CFI exactly reflects collateral flow relative to normal myocardial perfusion in humans.
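The two indices compared above are simple ratios of simultaneous measurements; a minimal sketch follows. The CFI(p) formula is the conventional pressure-derived collateral flow index, and the CPI definition as a ratio of flows is inferred from the abstract; names and example values are illustrative.

```python
def collateral_flow_index(p_wedge, p_aortic, p_cv):
    """Pressure-derived collateral flow index:
    CFIp = (coronary wedge pressure - central venous pressure)
           / (mean aortic pressure - central venous pressure).
    All pressures in mmHg, measured simultaneously during occlusion."""
    return (p_wedge - p_cv) / (p_aortic - p_cv)

def collateral_perfusion_index(mbf_during, mbf_after):
    """CPI: collateral-derived MBF during balloon occlusion relative to
    normal MBF after angioplasty (both in mL/min/g)."""
    return mbf_during / mbf_after

# Hypothetical patient: wedge 30 mmHg, aortic 90 mmHg, CVP 10 mmHg
cfi_p = collateral_flow_index(p_wedge=30.0, p_aortic=90.0, p_cv=10.0)  # 0.25
# Using the study's mean MBF values, CPI lands in the same range
cpi = collateral_perfusion_index(0.304, 1.207)
```

The reported regression (y = 0.88x + 0.01, r2 = 0.92) is the evidence that these two independently derived ratios track one another closely.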
Abstract:
This paper constitutes a summary of the consensus documents agreed at the First European Workshop on Implant Dentistry University Education held in Prague on 19-22 June 2008. Implant dentistry is becoming an increasingly important treatment alternative for the restoration of missing teeth as patients' expectations and demands increase. Furthermore, implant-related complications such as peri-implantitis are presenting more frequently in the dental surgery. This consensus paper recommends that implant dentistry should be an integral part of the undergraduate curriculum. Whilst few schools will achieve student competence in the surgical placement of implants, this should not preclude the inclusion of the fundamental principles of implant dentistry in the undergraduate curriculum, such as the evidence base for their use, indications and contraindications, and treatment of the complications that may arise. The consensus paper sets out the rationale for the introduction of implant dentistry in the dental curriculum and the knowledge base for an undergraduate programme in the subject. It lists the competencies that might be sought, without expectations of surgical placement of implants at this stage, and the assessment methods that might be employed. The paper also addresses the competencies and educational pathways for postgraduate education in implant dentistry.
Abstract:
Soil conservation technologies that fit well to the local scale and are acceptable to land users are increasingly needed. To achieve this at the small-holder farm level, an understanding is needed of the specific erosion processes and indicators, of the land users' knowledge, and of their willingness, ability, and possibilities to respond to the respective problems when deciding on control options. This study was carried out to assess local erosion and the performance of previously introduced conservation terraces from both technological and land users' points of view. The study was conducted during July to August 2008 in the Angereb watershed on 58 farm plots from three selected case-study catchments. Participatory erosion assessment and evaluation were implemented along with direct field measurement procedures. Our focus was to involve the land users in the action research to explore with them the effectiveness of existing conservation measures against the erosion hazard. Terrace characteristics were measured and evaluated against the terrace implementation guideline of Hurni (1986). The long-term consequences of seasonal erosion indicators had often not been known or noticed by the farmers. The cause-and-effect relationships between the erosion indicators and the conservation measures revealed the limitations and gaps to be addressed in developing sustainable erosion control strategies. Erosion control was found to be less effective than intended, and participants attributed these gaps to a lack of genuine land-user participation. The results of both the local erosion observation and the assessment of conservation efficacy show the need to promote approaches in which erosion evaluation and the planning of interventions are carried out by the farmers themselves. This paper describes the importance of involving the human factor in empirical erosion assessment methods for sustainable soil conservation.
Abstract:
PURPOSE Rapid assessment and intervention is important for the prognosis of acutely ill patients admitted to the emergency department (ED). The aim of this study was to prospectively develop and validate a model predicting the risk of in-hospital death based on all information available at the time of ED admission and to compare its discriminative performance with a non-systematic risk estimate by the triaging first health-care provider. METHODS Prospective cohort analysis based on a multivariable logistic regression for the probability of death. RESULTS A total of 8,607 consecutive admissions of 7,680 patients admitted to the ED of a tertiary care hospital were analysed. The most frequent APACHE II diagnostic categories at the time of admission were neurological (2,052, 24 %), trauma (1,522, 18 %), infection categories [1,328, 15 %; including sepsis (357, 4.1 %), severe sepsis (249, 2.9 %), septic shock (27, 0.3 %)], cardiovascular (1,022, 12 %), gastrointestinal (848, 10 %) and respiratory (449, 5 %). The predictors of the final model were age, prolonged capillary refill time, blood pressure, mechanical ventilation, oxygen saturation index, Glasgow coma score and APACHE II diagnostic category. The model showed good discriminative ability, with an area under the receiver operating characteristic curve of 0.92, and good internal validity. The model performed significantly better than non-systematic triaging of the patient. CONCLUSIONS The use of the prediction model can facilitate the identification of ED patients with higher mortality risk. The model performs better than a non-systematic assessment and may facilitate more rapid identification and commencement of treatment of patients at risk of an unfavourable outcome.
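A multivariable logistic regression of this kind maps the admission predictors to a probability of in-hospital death; the sketch below shows the general form only. The coefficients, the predictor subset, and the example patients are illustrative placeholders, not the study's fitted model.

```python
import math

# Illustrative coefficients (NOT the published model): intercept plus
# weights for age (years), Glasgow coma score, and mechanical ventilation.
COEFFS = {"intercept": -8.0, "age": 0.05, "gcs": -0.2, "ventilated": 1.5}

def predicted_mortality_risk(age, gcs, ventilated):
    """Logistic model: probability = 1 / (1 + exp(-linear_predictor)),
    where the linear predictor is a weighted sum of the admission data."""
    lp = (COEFFS["intercept"]
          + COEFFS["age"] * age
          + COEFFS["gcs"] * gcs
          + COEFFS["ventilated"] * (1.0 if ventilated else 0.0))
    return 1.0 / (1.0 + math.exp(-lp))

# A ventilated 70-year-old with GCS 6 receives a far higher predicted
# risk than a non-ventilated 30-year-old with GCS 15.
high = predicted_mortality_risk(age=70, gcs=6, ventilated=True)
low = predicted_mortality_risk(age=30, gcs=15, ventilated=False)
```

The AUROC of 0.92 reported above measures how well such predicted probabilities rank patients who died above those who survived, regardless of the absolute risk values.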
Abstract:
The aim of this study was to determine the reliability of the conditioned pain modulation (CPM) paradigm assessed by an objective electrophysiological method, the nociceptive withdrawal reflex (NWR), and by psychophysical measures, using hypothetical sample sizes for future studies as analytical goals. Thirty-four healthy volunteers participated in two identical experimental sessions, separated by 1 to 3 weeks. In each session, the cold pressor test (CPT) was used to induce CPM, and the NWR thresholds, electrical pain detection thresholds, and pain intensity ratings after suprathreshold electrical stimulation were assessed before and during CPT. CPM was consistently detected by all methods, and the electrophysiological measures did not introduce additional variation to the assessment. In particular, 99% of the trials resulted in higher NWR thresholds during CPT, with an average increase of 3.4 mA (p<0.001). Similarly, 96% of the trials resulted in higher electrical pain detection thresholds during CPT, with an average increase of 2.2 mA (p<0.001). Pain intensity ratings after suprathreshold electrical stimulation were reduced during CPT in 84% of the trials, displaying an average decrease of 1.5 points on a numeric rating scale (p<0.001). Under these experimental conditions, CPM reliability was acceptable for all assessment methods in terms of sample sizes for potential experiments. The presented results are encouraging with regard to the use of CPM as an assessment tool in experimental and clinical pain. Trial registration: ClinicalTrials.gov NCT01636440.
Abstract:
Motor retardation is a common symptom of major depressive disorder (MDD). Despite the existence of various assessment methods, little is known about the pathobiology of motor retardation. We aimed to elucidate aspects of motor control by investigating the association between objective motor activity and resting-state cerebral blood flow (CBF).