Abstract:
This document summarizes the available evidence and provides recommendations on the use of home blood pressure monitoring in clinical practice and in research. It updates the previous recommendations on the same topic issued in year 2000. The main topics addressed include the methodology of home blood pressure monitoring, its diagnostic and therapeutic thresholds, its clinical applications in hypertension, with specific reference to special populations, and its applications in research. The final section deals with the problems related to the implementation of these recommendations in clinical practice.
Abstract:
FRAX(®) is a fracture risk assessment algorithm developed by the World Health Organization in cooperation with other medical organizations and societies. Using easily available clinical information and femoral neck bone mineral density (BMD) measured by dual-energy X-ray absorptiometry (DXA), when available, FRAX(®) is used to predict the 10-year probability of hip fracture and major osteoporotic fracture. These values may be included in country-specific guidelines to aid clinicians in determining when fracture risk is sufficiently high that the patient is likely to benefit from pharmacological therapy to reduce that risk. Since the introduction of FRAX(®) into clinical practice, many practical clinical questions have arisen regarding its use. To address such questions, the International Society for Clinical Densitometry (ISCD) and International Osteoporosis Foundation (IOF) assigned task forces to review the best available medical evidence and make recommendations for optimal use of FRAX(®) in clinical practice. Questions were identified and divided into three general categories. A task force was assigned to investigate the medical evidence in each category and develop clinically useful recommendations. The BMD Task Force addressed issues that included the potential use of skeletal sites other than the femoral neck, the use of technologies other than DXA, and the deletion or addition of clinical data for FRAX(®) input. The evidence and recommendations were presented to a panel of experts at the ISCD-IOF FRAX(®) Position Development Conference, resulting in the development of ISCD-IOF Official Positions addressing FRAX(®)-related issues.
Abstract:
Rheumatoid arthritis is the only secondary cause of osteoporosis that is considered independent of bone density in the FRAX(®) algorithm. Although the input for rheumatoid arthritis in FRAX(®) is a dichotomous variable, one would intuitively expect more severe or active disease to be associated with a greater risk for fracture. We reviewed the literature to determine if specific disease parameters or medication use could be used to better characterize fracture risk in individuals with rheumatoid arthritis. Although many studies document a correlation between various parameters of disease activity or severity and decreased bone density, fewer have associated these variables with fracture risk. We reviewed these studies in detail and concluded that disability measures such as the HAQ (Health Assessment Questionnaire) and functional class do correlate with clinical fractures, but not with morphometric vertebral fractures. One large study found a strong correlation between duration of disease and fracture risk, but additional studies are needed to confirm this. There was little evidence linking other disease measures, such as the DAS (disease activity score), VAS (visual analogue scale), acute-phase reactants, or use of non-glucocorticoid medications, with increased fracture risk. We concluded that FRAX(®) calculations may underestimate fracture probability in patients with impaired functional status from rheumatoid arthritis, but that this could not be quantified at present. At this time, other disease measures cannot be used for fracture prediction. However, only a few, mostly small, studies addressed other disease parameters, and further research is needed. Additional questions for future research are suggested.
Abstract:
The best indirect evidence that increased bone turnover contributes to fracture risk is the fact that most of the proven therapies for osteoporosis are inhibitors of bone turnover. The evidence base for using biochemical markers of bone turnover in the assessment of fracture risk is somewhat less convincing. This relates to natural variability in the markers, problems with the assays, disparity in the statistical analyses of the relevant studies and the independence of their contribution to fracture risk. More research is clearly required to address these deficiencies before biochemical markers can contribute a useful independent risk factor for inclusion in FRAX(®).
Abstract:
Risk factors for fracture can be purely skeletal, e.g., bone mass, microarchitecture or geometry, or a combination of bone and falls risk related factors such as age and functional status. The remit of this Task Force was to review the evidence and consider if falls should be incorporated into the FRAX® model or, alternatively, to provide guidance to assist clinicians in clinical decision-making for patients with a falls history. It is clear that falls are a risk factor for fracture. Fracture probability may be underestimated by FRAX® in individuals with a history of frequent falls. The substantial evidence that various interventions are effective in reducing falls risk was reviewed. Targeting falls risk reduction strategies towards frail older people at high risk for indoor falls is appropriate. This Task Force believes that further fracture reduction requires measures to reduce falls risk in addition to bone directed therapy. Clinicians should recognize that patients with frequent falls are at higher fracture risk than currently estimated by FRAX® and include this in decision-making. However, quantitative adjustment of the FRAX® estimated risk based on falls history is not currently possible. In the long term, incorporation of falls as a risk factor in the FRAX® model would be ideal.
Abstract:
The 2010 Position Development Conference addressed four questions related to the impact of previous fractures on 10-year fracture risk as calculated by FRAX(®). To address these questions, PubMed was searched on the keywords "fracture, epidemiology, osteoporosis." Titles of retrieved articles were reviewed for an indication that risk for future fracture was discussed. Abstracts of these articles were reviewed for an indication that one or more of the questions listed above was discussed. For those that did, the articles were reviewed in greater detail to extract the findings and to find additional past work and citing works that also bore on the questions. The official positions and the supporting literature review are presented here. FRAX(®) underestimates fracture probability in persons with a history of multiple fractures (good, A, W). FRAX(®) may underestimate fracture probability in individuals with prevalent severe vertebral fractures (good, A, W). While there is evidence that hip, vertebral, and humeral fractures appear to confer greater risk of subsequent fracture than fractures at other sites, quantification of this incremental risk in FRAX(®) is not possible (fair, B, W). FRAX(®) may underestimate fracture probability in individuals with a parental history of non-hip fragility fracture (fair, B, W). Limitations of the methodology include performance by a single reviewer, preliminary review of the literature being confined to titles, and secondary review being limited to abstracts. Limitations of the evidence base include publication bias, overrepresentation of persons of European descent in the published studies, and technical differences in the methods used to identify prevalent and incident fractures. Emerging topics for future research include fracture epidemiology in non-European populations and men, the impact of fractures in family members other than parents, and the genetic contribution to fracture risk.
Abstract:
Tools to predict fracture risk are useful for selecting patients for pharmacological therapy in order to reduce fracture risk and redirect limited healthcare resources to those who are most likely to benefit. FRAX® is a World Health Organization fracture risk assessment algorithm for estimating the 10-year probability of hip fracture and major osteoporotic fracture. Effective application of FRAX® in clinical practice requires a thorough understanding of its limitations as well as its utility. For some patients, FRAX® may underestimate or overestimate fracture risk. In order to address some of the common issues encountered with the use of FRAX® for individual patients, the International Society for Clinical Densitometry (ISCD) and International Osteoporosis Foundation (IOF) assigned task forces to review the medical evidence and make recommendations for optimal use of FRAX® in clinical practice. Among the issues addressed were the use of bone mineral density (BMD) measurements at skeletal sites other than the femoral neck, the use of technologies other than dual-energy X-ray absorptiometry, the use of FRAX® without BMD input, the use of FRAX® to monitor treatment, and the addition of the rate of bone loss as a clinical risk factor for FRAX®. The evidence and recommendations were presented to a panel of experts at the Joint ISCD-IOF FRAX® Position Development Conference, resulting in the development of Joint ISCD-IOF Official Positions addressing FRAX®-related issues.
Abstract:
Dual-energy X-ray absorptiometry (DXA) is the most widely used technical instrument for evaluating bone mineral content (BMC) and density (BMD) in patients of all ages. However, its use in pediatric patients, during growth and development, poses a much more complex problem in terms of both the technical aspects and the interpretation of the results. For the adult population, there is a well-defined term of reference: the peak value of BMD attained by young healthy subjects at the end of skeletal growth. During childhood and adolescence, the comparison can be made only with healthy subjects of the same age, sex and ethnicity, but the situation is compounded by the wide individual variation in the process of skeletal growth (pubertal development, hormone action, body size and bone size). The International Society for Clinical Densitometry (ISCD) organized a Pediatric Position Development Conference to discuss the specific problems of bone densitometry in growing subjects (9-19 years of age) and to provide essential recommendations for its clinical use.
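The age-, sex- and ethnicity-matched comparison described above is conventionally expressed as a Z-score. A minimal sketch of that calculation (the function name and the reference values below are illustrative assumptions, not taken from the ISCD positions):

```python
def bmd_z_score(measured_bmd, reference_mean, reference_sd):
    """Z-score: number of standard deviations the measured BMD lies from
    the mean of healthy subjects matched for age, sex and ethnicity."""
    return (measured_bmd - reference_mean) / reference_sd

# Hypothetical example: a measured BMD of 0.80 g/cm^2 against a matched
# reference mean of 0.90 g/cm^2 with a standard deviation of 0.10 g/cm^2
z = bmd_z_score(0.80, 0.90, 0.10)  # about -1.0, i.e. 1 SD below the matched mean
```

A Z-score near zero indicates a value typical for the matched reference group; strongly negative values flag low bone density for age.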
Abstract:
A variety of technologies have been developed to assist decision-making during the management of patients with acute brain injury who require intensive care. A large body of research has been generated describing these various technologies. The Neurocritical Care Society (NCS) in collaboration with the European Society of Intensive Care Medicine (ESICM), the Society for Critical Care Medicine (SCCM), and the Latin America Brain Injury Consortium (LABIC) organized an international, multidisciplinary consensus conference to perform a systematic review of the published literature to help develop evidence-based practice recommendations on bedside physiologic monitoring. This supplement contains a Consensus Summary Statement with recommendations and individual topic reviews on physiologic processes important in the care of acute brain injury. In this article we provide the evidentiary tables for select topics including systemic hemodynamics, intracranial pressure, brain and systemic oxygenation, EEG, brain metabolism, biomarkers, processes of care and monitoring in emerging economies to provide the clinician ready access to evidence that supports recommendations about neuromonitoring.
Abstract:
Careful patient monitoring using a variety of techniques including clinical and laboratory evaluation, bedside physiological monitoring with continuous or non-continuous techniques and imaging is fundamental to the care of patients who require neurocritical care. How best to perform and use bedside monitoring is still being elucidated. To create a basic platform for care and a foundation for further research the Neurocritical Care Society in collaboration with the European Society of Intensive Care Medicine, the Society for Critical Care Medicine and the Latin America Brain Injury Consortium organized an international, multidisciplinary consensus conference to develop recommendations about physiologic bedside monitoring. This supplement contains a Consensus Summary Statement with recommendations and individual topic reviews as a background to the recommendations. In this article, we highlight the recommendations and provide additional conclusions as an aid to the reader and to facilitate bedside care.
Abstract:
Molecular monitoring of BCR/ABL transcripts by real time quantitative reverse transcription PCR (qRT-PCR) is an essential technique for clinical management of patients with BCR/ABL-positive CML and ALL. Though quantitative BCR/ABL assays are performed in hundreds of laboratories worldwide, results among these laboratories cannot be reliably compared due to heterogeneity in test methods, data analysis, reporting, and lack of quantitative standards. Recent efforts towards standardization have been limited in scope. Aliquots of RNA were sent to clinical test centers worldwide in order to evaluate methods and reporting for e1a2, b2a2, and b3a2 transcript levels using their own qRT-PCR assays. Total RNA was isolated from tissue culture cells that expressed each of the different BCR/ABL transcripts. Serial log dilutions were prepared, ranging from 10^0 to 10^-5, in RNA isolated from HL60 cells. Laboratories performed 5 independent qRT-PCR reactions for each sample type at each dilution. In addition, 15 qRT-PCR reactions of the 10^-3 b3a2 RNA dilution were run to assess reproducibility within and between laboratories. Participants were asked to run the samples following their standard protocols and to report cycle threshold (Ct), quantitative values for BCR/ABL and housekeeping genes, and ratios of BCR/ABL to housekeeping genes for each sample RNA. Thirty-seven (n=37) participants have submitted qRT-PCR results for analysis (36, 37, and 34 labs generated data for b2a2, b3a2, and e1a2, respectively). The limit of detection for this study was defined as the lowest dilution at which a Ct value could be detected for all 5 replicates. For b2a2, 15, 16, 4, and 1 lab(s) showed a limit of detection at the 10^-5, 10^-4, 10^-3, and 10^-2 dilutions, respectively. For b3a2, 20, 13, and 4 labs showed a limit of detection at the 10^-5, 10^-4, and 10^-3 dilutions, respectively. For e1a2, 10, 21, 2, and 1 lab(s) showed a limit of detection at the 10^-5, 10^-4, 10^-3, and 10^-2 dilutions, respectively.
Log %BCR/ABL ratio values provided a method for comparing results between the different laboratories for each BCR/ABL dilution series. Linear regression analysis revealed concordance among the majority of participant data over the 10^-1 to 10^-4 dilutions. The overall slope values showed comparable results among the majority of b2a2 (mean=0.939; median=0.9627; range (0.399 - 1.1872)), b3a2 (mean=0.925; median=0.922; range (0.625 - 1.140)), and e1a2 (mean=0.897; median=0.909; range (0.5174 - 1.138)) laboratory results (Fig. 1-3). Thirty-four (n=34) out of the 37 laboratories reported Ct values for all 15 replicates and only those with a complete data set were included in the inter-lab calculations. Eleven laboratories either did not report their copy number data or used other reporting units such as nanograms or cell numbers; therefore, only 26 laboratories were included in the overall analysis of copy numbers. The median copy number was 348.4, with a range from 15.6 to 547,000 copies (approximately a 4.5 log difference); the median intra-lab %CV was 19.2% with a range from 4.2% to 82.6%. While our international performance evaluation using serially diluted RNA samples has reinforced the fact that heterogeneity exists among clinical laboratories, it has also demonstrated that performance within a laboratory is overall very consistent. Accordingly, the availability of defined BCR/ABL RNAs may facilitate the validation of all phases of quantitative BCR/ABL analysis and may be extremely useful as a tool for monitoring assay performance. Ongoing analyses of these materials, along with the development of additional control materials, may solidify consensus around their application in routine laboratory testing and possible integration in worldwide efforts to standardize quantitative BCR/ABL testing.
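The study's core computations — the log %BCR/ABL ratio, the regression slope across a dilution series (ideally near 1), and the intra-laboratory %CV — can be sketched as follows. The replicate values and copy numbers below are hypothetical illustrations, not study data:

```python
import math
import statistics

def log_pct_ratio(bcr_abl_copies, housekeeping_copies):
    # log10 of the %BCR/ABL ratio each laboratory reports
    return math.log10(100.0 * bcr_abl_copies / housekeeping_copies)

def slope(xs, ys):
    # least-squares slope; an ideal serial dilution yields a slope near 1
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def pct_cv(values):
    # intra-laboratory coefficient of variation, in percent
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical lab: measured log ratios over the 10^-1..10^-4 dilutions
nominal = [-1.0, -2.0, -3.0, -4.0]           # log10 of the nominal dilution
measured = [log_pct_ratio(c, 1_000_000)      # BCR/ABL copies per 10^6 control copies
            for c in (100_000, 10_000, 1_000, 100)]
lab_slope = slope(nominal, measured)         # ~1.0 for perfect dilution behaviour

# Hypothetical 5-replicate copy numbers for the reproducibility sample
replicates = [320.0, 350.0, 375.0, 340.0, 360.0]
cv = pct_cv(replicates)                      # intra-lab %CV
```

A slope well below 1 would indicate compression of the measured ratios at low dilutions, one of the inter-laboratory discrepancies the evaluation was designed to surface.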
Abstract:
Purpose: In this prospective randomized study, the efficacy and safety of two immunosuppressive regimens (Tac, MMF, steroids vs. CsA, MMF, steroids) after lung transplantation were compared. The primary objective was the incidence of bronchiolitis obliterans syndrome (BOS). Secondary objectives were the incidence of acute rejection and infection, survival and adverse events. 248 patients with a complete 3-year follow-up were included in the analysis. Methods and Materials: Patients were randomized to treatment group A: Tac (0.01-0.03 mg/kg/d iv; 0.05-0.3 mg/kg/d po) or B: CsA (1-3 mg/kg/d iv; 2-8 mg/kg/d po). The MMF dose was 1-4 mg/d in both groups. No induction therapy was given. Patients were stratified for cystic fibrosis. Patients who were switched to a different immunosuppressive regimen were analyzed on an intention-to-treat basis. Results: 3 of 123 Tac patients and 41 of 125 CsA patients were switched to another immunosuppressive regimen and were analyzed as intention to treat. Three-year follow-up data of the complete patient cohort were included in this final analysis. The groups showed no difference in demographic data. Kaplan-Meier analysis revealed significantly less BOS in Tac-treated patients (p=0.033, log-rank test, pooled over strata). Cox regression showed a twofold higher risk for BOS in the CsA group (factor 2.003). The incidence of acute rejection was 67.5% (Tac) and 75.2% (CsA) (p=0.583). One- and 3-year survival rates were not different (85.4% Tac vs. 88.8% CsA, and 80.5% Tac vs. 83.2% CsA, p=n.s.). The incidence of infections and renal failure was similar (p=n.s.). Conclusions: Tac significantly reduced the risk for BOS after 3 years in this intention-to-treat analysis. Both regimens have good immunosuppressive potential and offer a similar safety profile with excellent one- and three-year survival rates. Acute rejection rates were similar in both groups. The incidence of infections and renal failure showed no difference.
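The Kaplan-Meier analysis used above can be sketched with a minimal product-limit estimator. The follow-up data below are invented for illustration only, not trial data:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.
    times  - follow-up time for each patient (e.g. years to BOS or censoring)
    events - 1 if the event (BOS) occurred, 0 if the patient was censored
    Returns a list of (event_time, survival) steps."""
    # sort by time, placing events before censorings at tied times
    order = sorted(zip(times, events), key=lambda te: (te[0], -te[1]))
    at_risk = len(order)
    survival = 1.0
    curve = []
    for t, event in order:
        if event:
            survival *= (at_risk - 1) / at_risk
            curve.append((t, survival))
        at_risk -= 1
    return curve

# Hypothetical arm: 6 patients, events at years 1, 2 and 4; censored at 3, 5, 5
curve = kaplan_meier([1, 2, 3, 4, 5, 5], [1, 1, 0, 1, 0, 0])
# survival steps: 5/6 after year 1, 4/6 after year 2, (4/6)*(2/3) = 4/9 after year 4
```

Comparing two such curves with a log-rank test (as the study did, pooled over strata) asks whether the event hazard differs between the Tac and CsA arms over the whole follow-up period.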
Abstract:
OBJECTIVE: To review and update the conceptual framework, indicator content and research priorities of the Organisation for Economic Co-operation and Development's (OECD) Health Care Quality Indicators (HCQI) project, after a decade of collaborative work. DESIGN: A structured assessment was carried out using a modified Delphi approach, followed by a consensus meeting, to assess the suite of HCQI for international comparisons, agree on revisions to the original framework and set priorities for research and development. SETTING: International group of countries participating in OECD projects. PARTICIPANTS: Members of the OECD HCQI expert group. RESULTS: A reference matrix, based on a revised performance framework, was used to map and assess all seventy HCQI routinely calculated by the OECD expert group. It was agreed to exclude a total of 21 indicators, owing to concerns about: (i) relevance; (ii) international comparability, particularly where heterogeneous coding practices might induce bias; (iii) feasibility, where the number of countries able to report was limited and the added value did not justify sustained effort; and (iv) actionability, for indicators that were unlikely to improve on the basis of targeted policy interventions. CONCLUSIONS: The revised OECD framework for HCQI represents a new milestone in a long-standing international collaboration among a group of countries committed to building common ground for performance measurement. The expert group believes that the continuation of this work is paramount to providing decision makers with a validated toolbox to act directly on quality improvement strategies.
Abstract:
The World Health Organization (WHO) plans to submit the 11th revision of the International Classification of Diseases (ICD) to the World Health Assembly in 2018. The WHO is working toward a revised classification system that has an enhanced ability to capture health concepts in a manner that reflects current scientific evidence and that is compatible with contemporary information systems. In this paper, we present recommendations made to the WHO by the ICD revision's Quality and Safety Topic Advisory Group (Q&S TAG) for a new conceptual approach to capturing healthcare-related harms and injuries in ICD-coded data. The Q&S TAG has grouped causes of healthcare-related harm and injuries into four categories that relate to the source of the event: (a) medications and substances, (b) procedures, (c) devices and (d) other aspects of care. Under the proposed multiple coding approach, one of these sources of harm must be coded as part of a cluster of three codes to depict, respectively, a healthcare activity as a 'source' of harm, a 'mode or mechanism' of harm and a consequence of the event summarized by these codes (i.e. injury or harm). Use of this framework depends on the implementation of a new and potentially powerful code-clustering mechanism in ICD-11. This new framework for coding healthcare-related harm has great potential to improve the clinical detail of adverse event descriptions, and the overall quality of coded health data.
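The proposed three-code cluster can be illustrated with a minimal data structure. The field names and example values below are hypothetical; in ICD-11 each field would hold an actual code rather than free text:

```python
from dataclasses import dataclass

@dataclass
class HarmCluster:
    """One adverse event expressed as the proposed ICD-11 three-code cluster."""
    source: str     # source of harm: a medication/substance, procedure,
                    # device, or other aspect of care
    mechanism: str  # the 'mode or mechanism' of harm
    harm: str       # the consequence of the event: the injury or harm itself

# Hypothetical example of a medication-related adverse event
event = HarmCluster(
    source="anticoagulant medication",
    mechanism="overdose during routine care",
    harm="gastrointestinal haemorrhage",
)
```

Grouping the three codes as one cluster, rather than as three unrelated entries, is what lets coded data answer "what caused this harm, how, and with what result" in a single query.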