933 results for Sequential organ failure assessment score
Abstract:
The Environmental Health (EH) program of Peace Corps (PC) Panama and the non-governmental organization (NGO) Waterlines have been assisting rural communities in Panama in gaining access to improved water sources through the community management (CM) model and participatory development. Unfortunately, there is little information available on how a water system functions once construction is complete and the volunteer leaves the community. This is a concern given that recent literature suggests that most communities are not able to indefinitely maintain a rural water system (RWS) without some form of external assistance (Sara and Katz, 1997; Newman et al., 2002; Lockwood, 2002, 2003, 2004; IRC, 2003; Schweitzer, 2009). Recognizing this concern, the EH program director encouraged the author to complete a post-project assessment of past EH water projects. To carry out the investigation, an easy-to-use monitoring and evaluation tool was developed based on a literature review and the author’s three years of field experience in rural Panama. The study methodology consists of benchmark scoring systems to rate the following ten indicators: watershed, source capture, transmission line, storage tank, distribution system, system reliability, willingness to pay, accounting/transparency, maintenance, and active water committee members. The assessment of 28 communities across the country revealed that the current state of physical infrastructure, as well as the financial, managerial and technical capabilities of water committees, varied significantly from community to community. While some communities enjoy continued service, with their water committees fulfilling all of their responsibilities, others have seen their water systems fall apart and be abandoned. Overall, higher scores were more prevalent for all ten indicators. However, even the communities with the highest scores requested some form of additional assistance. The assessment suggests that the EH program should incorporate an institutional support mechanism (ISM) into its sector policy in order to systematically provide follow-up support to rural communities in Panama. A full-time circuit rider with flexible funding would be able to provide additional technical support, training and encouragement to those communities in need.
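A minimal sketch of how the ten-indicator benchmark scoring could be tabulated for a single community. The indicator names come from the abstract; the 0-10 scale, the weak-indicator threshold and the example scores are illustrative assumptions, not the tool's actual benchmarks.

```python
INDICATORS = [
    "watershed", "source capture", "transmission line", "storage tank",
    "distribution system", "system reliability", "willingness to pay",
    "accounting/transparency", "maintenance", "active water committee members",
]

def summarize(scores: dict, weak_threshold: float = 5.0):
    """Return the community's mean indicator score and any indicators below the threshold."""
    mean_score = sum(scores.values()) / len(scores)
    weak = [name for name, s in scores.items() if s < weak_threshold]
    return mean_score, weak

# Hypothetical community: generally well-scoring, with maintenance lagging behind.
example = {name: 7.0 for name in INDICATORS}
example["maintenance"] = 3.0
mean_score, weak = summarize(example)
print(f"mean score: {mean_score:.1f}, indicators needing follow-up: {weak}")
```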
Abstract:
BACKGROUND: This study investigated the role of a negative FAST (focused assessment with sonography for trauma) in the diagnostic and therapeutic algorithm of multiply injured patients with liver or splenic lesions. METHODS: A retrospective analysis of 226 multiply injured patients with liver or splenic lesions treated at Bern University Hospital, Switzerland. RESULTS: FAST failed to detect free fluid or organ lesions in 45 of 226 patients with spleen or liver injuries (sensitivity 80.1%). Overall specificity was 99.5%. The positive and negative predictive values were 99.4% and 83.3%, respectively. The overall likelihood ratios for a positive and negative FAST were 160.2 and 0.2, respectively. Grade III-V organ lesions were detected more frequently than grade I and II lesions. Without the additional diagnostic accuracy of a CT scan, the mean ISS of the FAST-false-negative patients would have been significantly underestimated and 7 previously unsuspected intra-abdominal injuries would have been missed. CONCLUSION: FAST is an expedient tool for the primary assessment of polytraumatized patients to rule out high-grade intra-abdominal injuries. However, the low overall diagnostic sensitivity of FAST may lead to underestimated injury patterns, and delayed complications may occur. Hence, in hemodynamically stable patients with abdominal trauma, an early CT scan should be considered and one must be aware of the potential shortcomings of a "negative FAST".
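The reported accuracy measures follow from a standard 2x2 contingency table. A minimal sketch of those formulas in Python; the injured-patient counts (181 detected, 45 missed of 226) come from the abstract, while the false-positive and true-negative counts are illustrative placeholders, not study data.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, predictive values and likelihood ratios from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_positive": sensitivity / (1 - specificity),
        "lr_negative": (1 - sensitivity) / specificity,
    }

metrics = diagnostic_metrics(tp=181, fp=2, fn=45, tn=300)  # fp and tn are placeholders
print({name: round(value, 3) for name, value in metrics.items()})
```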
Abstract:
Studies suggest that hurricane hazard patterns (e.g. intensity and frequency) may change as a consequence of the changing global climate. As hurricane patterns change, it can be expected that hurricane damage risks and costs may change as a result. This indicates the necessity of developing hurricane risk assessment models that are capable of accounting for changing hurricane hazard patterns, and of developing hurricane mitigation and climatic adaptation strategies. This thesis proposes a comprehensive hurricane risk assessment and mitigation framework that accounts for a changing global climate and that can be adapted to various types of infrastructure, including residential buildings and power distribution poles. The framework includes hurricane wind field models, hurricane surge height models and hurricane vulnerability models to estimate damage risks due to hurricane wind speed, hurricane frequency, and hurricane-induced storm surge, and accounts for the time-dependent properties of these parameters as a result of climate change. The research then uses median insured house values, discount rates, housing inventory, etc. to estimate hurricane damage costs to residential construction. The framework was also adapted to timber distribution poles to assess the impacts climate change may have on timber distribution pole failure. This research finds that climate change may have a significant impact on the hurricane damage risks and damage costs of residential construction and timber distribution poles. In an effort to reduce damage costs, this research develops mitigation/adaptation strategies for residential construction and timber distribution poles. The cost-effectiveness of these adaptation/mitigation strategies is evaluated through the use of a Life-Cycle Cost (LCC) analysis. In addition, a scenario-based analysis of mitigation strategies for timber distribution poles is included. For both residential construction and timber distribution poles, adaptation/mitigation measures were found to reduce damage costs. Finally, the research develops the Coastal Community Social Vulnerability Index (CCSVI) to include the social vulnerability of a region to hurricane hazards within this hurricane risk assessment. This index quantifies the social vulnerability of a region by combining various social characteristics of a region with time-dependent parameters of hurricanes (i.e. hurricane wind and hurricane-induced storm surge). Climate change was found to have an impact on the CCSVI (i.e. climate change may have an impact on the social vulnerability of hurricane-prone regions).
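The cost-effectiveness comparison rests on a standard life-cycle cost calculation: the up-front cost of a mitigation measure plus the discounted expected annual hurricane losses over the service life. A minimal sketch under assumed inputs; the discount rate, time horizon, loss values and growth factor below are illustrative, not figures from the thesis.

```python
def life_cycle_cost(upfront_cost: float, annual_losses: list, discount_rate: float) -> float:
    """Present value of the up-front cost plus discounted expected annual losses."""
    pv_losses = sum(loss / (1 + discount_rate) ** (year + 1)
                    for year, loss in enumerate(annual_losses))
    return upfront_cost + pv_losses

# Climate change can be represented by letting expected annual losses grow over time.
years = 50
unmitigated = [1200.0 * 1.01 ** t for t in range(years)]   # $/year under a growing hazard
retrofitted = [700.0 * 1.01 ** t for t in range(years)]    # $/year after a mitigation retrofit

print(f"do nothing: {life_cycle_cost(0.0, unmitigated, 0.03):,.0f}")
print(f"retrofit:   {life_cycle_cost(8000.0, retrofitted, 0.03):,.0f}")
# The measure is cost-effective if its life-cycle cost is lower than the do-nothing cost.
```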
Abstract:
In the Dominican Republic, economic growth in the past twenty years has not yielded sufficient improvement in access to drinking water services, especially in rural areas where 1.5 million people do not have access to an improved water source (WHO, 2006). Worldwide, strategic development planning in the rural water sector has focused on participatory processes and the use of demand filters to ensure that service levels match community commitment to post-project operation and maintenance. However, studies have concluded that an alarmingly high percentage of drinking water systems (20-50%) do not provide service at the design levels and/or fail altogether (up to 90%) (BNWP, 2009; Annis, 2006; Reents, 2003). The World Bank, USAID, NGOs, and private consultants have invested significant resources in an effort to determine what components make up an “enabling environment” for sustainable community management of rural water systems (RWS). Research has identified an array of critical factors, internal and external to the community, which affect the long-term sustainability of water services. Different frameworks have been proposed in order to better understand the linkages between individual factors and sustainability of service. This research proposes a Sustainability Analysis Tool to evaluate the sustainability of RWS, adapted from previous relevant work in the field to reflect the realities in the Dominican Republic. It can be used as a diagnostic tool for government entities and development organizations to characterize the needs of specific communities and identify weaknesses in existing training regimes or support mechanisms. The framework utilizes eight indicators in three categories (Organization/Management, Financial Administration, and Technical Service). Nineteen independent variables are measured, resulting in a score of sustainability likely (SL), possible (SP), or unlikely (SU) for each of the eight indicators. Thresholds are based upon benchmarks from the DR and around the world, primary data collected during the research, and the author’s 32 months of field experience. A final sustainability score is calculated using weighting factors for each indicator, derived from Lockwood (2003). The framework was tested using a statistically representative, geographically stratified random sample of 61 water systems built in the DR by initiatives of the National Institute of Potable Water (INAPA) and Peace Corps. The results concluded that 23% of sample systems are likely to be sustainable in the long term, 59% are possibly sustainable, and for 18% it is unlikely that the community will be able to overcome any significant challenge. Communities scored as unlikely to be sustainable performed poorly in participation, financial durability, and governance, while the highest scores were for system function and repair service. The Sustainability Analysis Tool results are verified by INAPA and PC reports, evaluations, and database information, as well as field observations and primary data collected during the surveys. Future research will analyze the nature and magnitude of relationships between key factors and the sustainability score defined by the tool. Factors include: gender participation, legal status of water committees, plumber/operator remuneration, demand responsiveness, post-construction support methodologies, and project design criteria.
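A minimal sketch of the tool's aggregation step: each of the eight indicators is rated SL, SP or SU against its thresholds, and a weighted overall score is computed. The rating-to-number mapping, the indicator names and the weights below are illustrative assumptions; the actual weights derived from Lockwood (2003) are not reproduced here.

```python
RATING_VALUE = {"SL": 2, "SP": 1, "SU": 0}  # sustainability likely / possible / unlikely

def overall_score(ratings: dict, weights: dict) -> float:
    """Weighted mean of indicator ratings on the 0 (SU) to 2 (SL) scale."""
    total_weight = sum(weights.values())
    return sum(weights[name] * RATING_VALUE[rating]
               for name, rating in ratings.items()) / total_weight

# Hypothetical community, with indicators grouped loosely by the tool's three categories.
weights = {"participation": 2, "governance": 1, "financial durability": 2,
           "tariff collection": 1, "bookkeeping": 1, "system function": 2,
           "repair service": 2, "operator capacity": 1}
ratings = {"participation": "SP", "governance": "SU", "financial durability": "SP",
           "tariff collection": "SP", "bookkeeping": "SL", "system function": "SL",
           "repair service": "SL", "operator capacity": "SP"}
print(f"overall sustainability score: {overall_score(ratings, weights):.2f} (max 2.00)")
```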
Abstract:
INTRODUCTION: Sedative and analgesic drugs are frequently used in critically ill patients. Their overuse may prolong mechanical ventilation and length of stay in the intensive care unit. Guidelines recommend use of sedation protocols that include sedation scores and trials of sedation cessation to minimize drug use. We evaluated processed electroencephalography (response and state entropy and bispectral index) as an adjunct to monitoring the effects of commonly used sedative and analgesic drugs and of intratracheal suctioning. METHODS: Electrodes for monitoring bispectral index and entropy were placed on the foreheads of 44 critically ill patients who required mechanical ventilation and had no previous brain dysfunction. Sedation was targeted individually using the Ramsay Sedation Scale, recorded every 2 hours or more frequently. Use of and indications for sedative and analgesic drugs and intratracheal suctioning were recorded manually and using a camera. At the end of the study, processed electroencephalographical and haemodynamic variables collected before and after each drug application and tracheal suctioning were analyzed. The Ramsay score was used for comparison with processed electroencephalography when assessed within 15 minutes of an intervention. RESULTS: The indications for boluses of sedative drugs exhibited statistically significant, albeit clinically irrelevant, differences in terms of their association with processed electroencephalographical parameters. Electroencephalographical variables decreased significantly after a bolus, but a specific pattern in electroencephalographical variables before drug administration was not identified. The same was true for opiate administration. At both 30 minutes and 2 minutes before intratracheal suctioning, there was no difference in electroencephalographical or clinical signs between patients who had or had not received drugs 10 minutes before suctioning. Among patients who received drugs, electroencephalographical parameters returned to baseline more rapidly. In those cases in which the Ramsay score was assessed before the event, processed electroencephalography exhibited high variation. CONCLUSIONS: Unpleasant or painful stimuli and sedative and analgesic drugs are associated with significant changes in processed electroencephalographical parameters. However, clinical indications for drug administration were not reflected in these electroencephalographical parameters, and barely in the sedation level before drug administration or tracheal suctioning. This precludes incorporation of entropy and bispectral index as target variables for sedation and analgesia protocols in critically ill patients.
Abstract:
BACKGROUND: We aimed to assess the value of a structured clinical assessment and genetic testing for refining the diagnosis of abacavir hypersensitivity reactions (ABC-HSRs) in a routine clinical setting. METHODS: We performed a diagnostic reassessment using a structured patient chart review in individuals who had stopped ABC because of suspected HSR. Two HIV physicians blinded to the human leukocyte antigen (HLA) typing results independently classified these individuals on a scale between 3 (ABC-HSR highly likely) and -3 (ABC-HSR highly unlikely). Scoring was based on symptoms, onset of symptoms and comedication use. Patients were classified as clinically likely (mean score ≥2), uncertain (mean score ≥ -1 and ≤ 1) and unlikely (mean score ≤ -2). HLA typing was performed using sequence-based methods. RESULTS: From 131 reassessed individuals, 27 (21%) were classified as likely, 43 (33%) as unlikely and 61 (47%) as uncertain ABC-HSR. Of the 131 individuals with suspected ABC-HSR, 31% were HLA-B*5701-positive compared with 1% of 140 ABC-tolerant controls (P < 0.001). HLA-B*5701 carriage rate was higher in individuals with likely ABC-HSR compared with those with uncertain or unlikely ABC-HSR (78%, 30% and 5%, respectively; P < 0.001). Only six (7%) HLA-B*5701-negative individuals were classified as likely HSR after reassessment. CONCLUSIONS: HLA-B*5701 carriage is highly predictive of clinically diagnosed ABC-HSR. The high proportion of HLA-B*5701-negative individuals with minor symptoms among individuals with suspected HSR indicates overdiagnosis of ABC-HSR in the era preceding genetic screening. A structured clinical assessment and genetic testing could reduce the rate of inappropriate ABC discontinuation and identify individuals at high risk for ABC-HSR.
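A minimal sketch of the chart-review classification described above: the two blinded raters' scores (each on the -3 to 3 scale) are averaged and mapped to the stated cut-offs (likely ≥2, uncertain between -1 and 1, unlikely ≤ -2). How scores falling between the bands are handled is not specified in the abstract, so the "unclassified" fallback is an assumption.

```python
def classify_abc_hsr(score_rater1: float, score_rater2: float) -> str:
    """Map the mean of two raters' scores (-3..3) to the likely/uncertain/unlikely bands."""
    mean_score = (score_rater1 + score_rater2) / 2
    if mean_score >= 2:
        return "likely"
    if -1 <= mean_score <= 1:
        return "uncertain"
    if mean_score <= -2:
        return "unlikely"
    return "unclassified"  # gap between bands; handling assumed, not stated in the abstract

print(classify_abc_hsr(3, 2))    # likely
print(classify_abc_hsr(0, -1))   # uncertain
print(classify_abc_hsr(-3, -2))  # unlikely
```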
Abstract:
OBJECTIVE: Vital exhaustion and type D personality have previously predicted mortality and cardiac events in patients with chronic heart failure (CHF). Reduced heart rate recovery (HRR) also predicts morbidity and mortality in CHF. We hypothesized that elevated levels of vital exhaustion and type D personality are both associated with decreased HRR. METHODS: Fifty-one patients with CHF (mean age 58 +/- 12 years, 82% men) and a left ventricular ejection fraction (LVEF) ≤40% underwent standard exercise testing before receiving outpatient cardiac rehabilitation. They completed the 9-item short form of the Maastricht Vital Exhaustion Questionnaire and the 14-item type D questionnaire asking about negative affectivity and social inhibition. HRR was calculated as the difference between heart rate at the end of exercise and 1 min after abrupt cessation of exercise (HRR-1). Regression analyses were adjusted for gender, age, LVEF, and maximum exercise capacity. RESULTS: Vital exhaustion explained 8.4% of the variance in continuous HRR-1 (p=0.045). For each point increase on the vital exhaustion score (range 0-18) there was a mean +/- SEM decrease of 0.54 +/- 0.26 bpm in HRR-1. Type D personality showed a trend toward statistical significance for an association with lower HRR-1, explaining 6.5% of the variance (p<0.08). The likelihood of having an HRR-1 ≤18 bpm was significantly higher in patients with type D personality than in those without (odds ratio=7.62, 95% CI 1.50-38.80). CONCLUSIONS: Elevated levels of vital exhaustion and type D personality were both independently associated with reduced HRR-1. The findings provide a hitherto unexplored psychobiological explanation for poor cardiac outcome in patients with CHF.
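A small sketch of the HRR-1 definition used above: heart rate at the end of exercise minus heart rate one minute after abrupt cessation, flagged against the ≤18 bpm cut-off mentioned in the abstract. The example heart rates are illustrative, not study data.

```python
def hrr_1(hr_end_exercise_bpm: int, hr_1min_post_bpm: int, cutoff_bpm: int = 18):
    """Return HRR-1 in bpm and whether it falls at or below the cut-off (blunted recovery)."""
    recovery = hr_end_exercise_bpm - hr_1min_post_bpm
    return recovery, recovery <= cutoff_bpm

recovery, blunted = hrr_1(hr_end_exercise_bpm=130, hr_1min_post_bpm=118)
print(f"HRR-1 = {recovery} bpm, blunted: {blunted}")  # 12 bpm, True
```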
Abstract:
Introduction Several recent studies have shown that a positive fluid balance in critical illness is associated with worse outcome. We tested the effects of moderate vs. high-volume resuscitation strategies on mortality, systemic and regional blood flows, mitochondrial respiration, and organ function in two experimental sepsis models. Methods 48 pigs were randomized to continuous endotoxin infusion, fecal peritonitis, or a control group (n = 16 each), and each group was further randomized to one of two basal rates of volume supply for 24 hours [moderate-volume (10 ml/kg/h, Ringer's lactate, n = 8); high-volume (15 + 5 ml/kg/h, Ringer's lactate and hydroxyethyl starch (HES), n = 8)], both supplemented by additional volume boluses as guided by urinary output, filling pressures, and responses in stroke volume. Systemic and regional hemodynamics were measured and tissue specimens were taken for mitochondrial function assessment and histological analysis. Results Mortality in the high-volume groups was 87% (peritonitis), 75% (endotoxemia), and 13% (controls). In the moderate-volume groups mortality was 50% (peritonitis), 13% (endotoxemia) and 0% (controls). Both septic groups became hyperdynamic. While neither sepsis nor volume resuscitation strategy was associated with altered hepatic or muscle mitochondrial complex I- and II-dependent respiration, non-survivors had lower hepatic complex II-dependent respiratory control ratios (2.6 +/- 0.7, vs. 3.3 +/- 0.9 in survivors; P = 0.01). Histology revealed moderate damage in all organs, colloid plaques in lung tissue of the high-volume groups, and severe kidney damage in endotoxin high-volume animals. Conclusions High-volume resuscitation including HES in experimental peritonitis and endotoxemia increased mortality despite better initial hemodynamic stability. This suggests that the strategy of early fluid management influences outcome in sepsis. The high mortality was not associated with reduced mitochondrial complex I- or II-dependent muscle and hepatic respiration.
Abstract:
AIM: The purpose of this study was to systematically review the literature on the survival rates of palatal implants, Onplants®, miniplates and miniscrews. MATERIAL AND METHODS: An electronic MEDLINE search supplemented by manual searching was conducted to identify randomized clinical trials and prospective and retrospective cohort studies on palatal implants, Onplants®, miniplates and miniscrews with a mean follow-up time of at least 12 weeks and at least 10 units per modality examined clinically at a follow-up visit. Assessment of studies and data abstraction were performed independently by two reviewers. Reported failures of the devices used were analyzed using random-effects Poisson regression models to obtain summary estimates and 95% confidence intervals (CI) of failure and survival proportions. RESULTS: The search up to January 2009 provided 390 titles and 71 abstracts, with full-text analysis of 34 articles yielding 27 studies that met the inclusion criteria. In the meta-analysis, the failure rate was 17.2% for Onplants® (95% CI: 5.9-35.8%), 10.5% for palatal implants (95% CI: 6.1-18.1%), 16.4% for miniscrews (95% CI: 13.4-20.1%) and 7.3% for miniplates (95% CI: 5.4-9.9%). Miniplates and palatal implants, representing torque-resisting temporary anchorage devices (TADs), when grouped together, showed a 1.92-fold (95% CI: 1.06-2.78) lower clinical failure rate than miniscrews. CONCLUSION: Based on the available evidence in the literature, palatal implants and miniplates showed comparable survival rates of ≥90% over a period of at least 12 weeks, and yielded superior survival compared with miniscrews. Palatal implants and miniplates for temporary anchorage provide reliable absolute orthodontic anchorage. If the intended orthodontic treatment requires multiple miniscrews to provide adequate anchorage, the reliability of such systems is questionable. For patients who are undergoing extensive orthodontic treatment, force vectors may need to be varied or the roots of the teeth to be moved may need to slide past the anchors. In this context, palatal implants or miniplates should be the TADs of choice.
Abstract:
BACKGROUND: Endoderm organ primordia become specified between gastrulation and gut tube folding in Amniotes. Although the requirement for retinoic acid (RA) signaling for the development of a few individual endoderm organs has been established, a systematic assessment of its activity along the entire antero-posterior axis has not been performed in this germ layer. METHODOLOGY/PRINCIPAL FINDINGS: RA is synthesized from gastrulation to somitogenesis in the mesoderm that lies close to the developing gut tube. In the branchial arch region, specific levels of RA signaling control organ boundaries. The most anterior endoderm, which forms the thyroid gland, is specified in the absence of RA signaling. Increasing RA in anterior branchial arches results in repression of the thyroid primordium and the induction of more posterior markers such as branchial arch Hox genes. Conversely, reducing RA signaling shifts Hox genes posteriorly in endoderm. These results imply that RA acts as a caudalizing factor in a graded manner in pharyngeal endoderm. Posterior foregut and midgut organ primordia also require RA, but exposing endoderm to additional RA is not sufficient to expand these primordia anteriorly. We show that in chick, in contrast to non-Amniotes, RA signaling is necessary not only during gastrulation, but also throughout gut tube folding during somitogenesis. Our results show that the induction of CdxA, a midgut marker, and pancreas induction require direct RA signaling in endoderm. Moreover, communication between CdxA(+) cells is necessary to maintain CdxA expression, thereby synchronizing the cells of the midgut primordium. We further show that the RA pathway acts synergistically with FGF4 in endoderm patterning rather than mediating FGF4 activity. CONCLUSIONS/SIGNIFICANCE: Our work establishes that RA signaling coordinates the position of different endoderm organs along the antero-posterior axis in chick embryos and could serve as a basis for the differentiation of specific endodermal organs from ES cells.
Abstract:
OBJECTIVE: To compare image quality and radiation dose of thoracoabdominal computed tomography (CT) angiography at 80 and 100 kVp and to assess the feasibility of reducing contrast medium volume from 60 to 45 mL at 80 kVp. MATERIALS AND METHODS: This retrospective study had institutional review board approval; informed consent was waived. Seventy-five patients who had undergone thoracoabdominal 64-section multidetector-row CT angiography were divided into 3 groups of 25 patients each. Patients of groups A (tube voltage, 100 kVp) and B (tube voltage, 80 kVp) received 60 mL of contrast medium at 4 mL/s. Patients of group C (tube voltage, 80 kVp) received 45 mL of contrast medium at 3 mL/s. Mean aortoiliac attenuation, image noise, and contrast-to-noise ratio were assessed. The measurement of radiation dose was based on the volume CT dose index. Three independent readers assessed the diagnostic image quality. RESULTS: Mean aortoiliac attenuation for group B (621.1 +/- 90.5 HU) was significantly greater than for groups A and C (485.2 +/- 110.5 HU and 483.1 +/- 119.8 HU, respectively) (P < 0.001). Mean image noise was significantly higher for groups B and C than for group A (P < 0.05). The contrast-to-noise ratio did not significantly differ between the groups (group A, 35.0 +/- 13.8; group B, 31.7 +/- 10.1; group C, 27.3 +/- 11.5; P = 0.08). The mean volume CT dose index in groups B and C (5.2 +/- 0.4 mGy and 4.9 +/- 0.3 mGy, respectively) was reduced by 23.5% and 27.9%, respectively, compared with group A (6.8 +/- 0.8 mGy) (P < 0.001). The average overall diagnostic image quality for the 3 groups was graded as good or better. The score for group A was significantly higher than that for group C (P < 0.01); no difference was seen between groups A and B (P = 0.92). CONCLUSIONS: Reduction of tube voltage from 100 to 80 kVp for thoracoabdominal CT angiography significantly reduces radiation dose without compromising image quality. Reduction of contrast medium volume to 45 mL at 80 kVp resulted in lower but still diagnostically acceptable image quality.
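The quoted dose reductions follow directly from the mean volume CT dose index values; a quick arithmetic check in Python:

```python
# Mean CTDIvol per group (mGy), as reported in the abstract.
ctdi_vol = {"A (100 kVp, 60 mL)": 6.8, "B (80 kVp, 60 mL)": 5.2, "C (80 kVp, 45 mL)": 4.9}
reference = ctdi_vol["A (100 kVp, 60 mL)"]

for group, dose in ctdi_vol.items():
    reduction = (reference - dose) / reference * 100
    print(f"{group}: {dose} mGy, reduction vs. group A = {reduction:.1f}%")
# Group B: 23.5%, group C: 27.9%, matching the percentages quoted in the abstract.
```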
Abstract:
To assess the reliability of the Burdizzo procedure for castrating calves and lambs, testicular tissue from 63 bull calves (15 intact and 48 castrated) and 69 male lambs (35 intact and 34 castrated) was collected at slaughter and assessed histologically. The bull calves were castrated at either one, four to five, or 12 to 16 weeks of age and the lambs at either one or 10 weeks. There was clear evidence of spermatogenesis in testicular tissue from all the intact animals. In the samples from the calves that had been castrated at 12 to 16 weeks, functional testicular tissue was completely lacking. However, there was evidence of spermatogenesis and steroidogenesis in the calves that had been castrated at one week or four to five weeks, respectively. Failure to achieve complete involution of the testicular parenchyma was observed in the majority of lambs, irrespective of the age at which they had been castrated.
Abstract:
BACKGROUND Recommendations from international task forces on geriatric assessment emphasize the need for research, including validation of cancer-specific geriatric assessment (C-SGA) tools, in oncological settings. The objective of this study was to evaluate the feasibility of the SAKK Cancer-Specific Geriatric Assessment (C-SGA) in clinical practice. METHODS A cross-sectional study of patients >=65 years old (N = 51) with pathologically confirmed cancer presenting for initiation of chemotherapy treatment (07/01/2009-03/31/2011) at two oncology departments in Swiss canton hospitals: Kantonsspital Graubunden (KSGR, N = 25) and Kantonsspital St. Gallen (KSSG, N = 26). Data were collected using three instruments: the SAKK C-SGA plus physician and patient evaluation forms. The SAKK C-SGA includes six measures covering five geriatric assessment domains (comorbidity, function, psychosocial, nutrition, cognition) using a mix of medical record abstraction (MRA) and patient interview. Five individual domain scores and one overall SAKK C-SGA score were calculated and dichotomized as below/above literature-based cut-offs. The SAKK C-SGA was evaluated by: patient- and physician-estimated time to complete, ease of completion, and difficult or unanswered questions. RESULTS Time to complete the patient questionnaire was considered acceptable by almost all (>=96%) patients and physicians. Patients reported slightly shorter times to complete the questionnaire than physicians (17.33 +/- 7.34 vs. 20.59 +/- 6.53 minutes, p = 0.02). Both groups rated the patient questionnaire as easy/fairly easy to complete (91% vs. 84%, respectively, p = 0.14), with few difficult or unanswered questions. The MRA took on average 8.32 +/- 4.72 minutes to complete. All physicians (100%) considered the time to complete the MRA acceptable, and 96% rated it as easy/fairly easy to complete. The individual study site populations differed on health-related characteristics (excellent/good physician-rated general health: KSGR 71% vs. KSSG 32%, p = 0.007). The overall mean C-SGA score was 2.4 +/- 1.12. Patients at KSGR had lower C-SGA scores (2.00 +/- 1.19 vs. 2.81 +/- 0.90, p = 0.009) and a smaller proportion (28% vs. 65%, p = 0.008) was above the C-SGA cut-off score compared with KSSG. CONCLUSIONS These results suggest that the SAKK C-SGA is a feasible, practical tool for use in clinical practice. It demonstrated discriminative ability based on objective geriatric assessment measures, but additional investigations of its use for clinical decision-making are warranted. The SAKK C-SGA also provides important, usable domain information for interventions to optimize outcomes in older cancer patients.
Abstract:
OBJECTIVE The aim of this study was to assess the association between frailty and risk for heart failure (HF) in older adults. BACKGROUND Frailty is common in the elderly and is associated with adverse health outcomes. The impact of frailty on HF risk is not known. METHODS We assessed the association between frailty, using the Health ABC Short Physical Performance Battery (HABC Battery) and the Gill index, and incident HF in 2825 participants aged 70 to 79 years. RESULTS The mean age of participants was 74 ± 3 years; 48% were men and 59% were white. During a median follow-up of 11.4 (7.1-11.7) years, 466 participants developed HF. Compared with non-frail participants, moderate (HR 1.36, 95% CI 1.08-1.71) and severe frailty (HR 1.88, 95% CI 1.02-3.47) by the Gill index were associated with a higher risk for HF. The HABC Battery score was linearly associated with HF risk after adjusting for the Health ABC HF Model (HR 1.24, 95% CI 1.13-1.36 per SD decrease in score) and remained significant when death was controlled for as a competing risk (HR 1.30; 95% CI 1.00-1.55). Results were comparable across age, sex, and race, and in subgroups based on diabetes mellitus or cardiovascular disease at baseline. Addition of HABC Battery scores to the Health ABC HF Risk Model improved discrimination (change in C-index, 0.014; 95% CI 0.010-0.018) and appropriately reclassified 13.4% (net reclassification improvement 0.073, 95% CI 0.021-0.125; P = .006) of participants (8.3% who developed HF and 5.1% who did not). CONCLUSIONS Frailty is independently associated with risk of HF in older adults.
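The category-based net reclassification improvement (NRI) reported above has a simple closed form: among participants with events, the proportion moved up a risk category minus the proportion moved down, plus the reverse among participants without events. A minimal sketch; the event total (466) and cohort size (2825) come from the abstract, while the reclassification counts are illustrative placeholders, not the Health ABC data.

```python
def net_reclassification_improvement(up_events: int, down_events: int, n_events: int,
                                     up_nonevents: int, down_nonevents: int, n_nonevents: int) -> float:
    """Category-based NRI: upward moves should favour events, downward moves non-events."""
    nri_events = (up_events - down_events) / n_events
    nri_nonevents = (down_nonevents - up_nonevents) / n_nonevents
    return nri_events + nri_nonevents

nri = net_reclassification_improvement(up_events=60, down_events=25, n_events=466,
                                       up_nonevents=180, down_nonevents=300, n_nonevents=2825 - 466)
print(f"NRI = {nri:.3f}")  # placeholder counts give an NRI of roughly 0.13
```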
Abstract:
OBJECTIVE: Anaemia in rheumatoid arthritis (RA) is prototypical of the chronic disease type and is often neglected in clinical practice. We studied anaemia in relation to disease activity, medications and radiographic progression. METHODS: Data were collected between 1996 and 2007 over a mean follow-up of 2.2 years. Anaemia was defined according to WHO criteria (♀: haemoglobin <12 g/dl; ♂: haemoglobin <13 g/dl) or alternative criteria. Anaemia prevalence was studied in relation to disease parameters and pharmacological therapy. Radiographic progression was analysed in 9731 radiograph sets from 2681 patients in crude longitudinal regression models and after adjusting for potential confounding factors, including the disease activity score based on the 28-joint count for tender and swollen joints and the erythrocyte sedimentation rate (DAS28-ESR) or the clinical disease activity index (cDAI), synthetic antirheumatic drugs and anti-tumour necrosis factor (TNF) therapy. RESULTS: Anaemia prevalence decreased from more than 24% in the years before 2001 to 15% in 2007. Erosions progressed significantly faster in patients with anaemia (p<0.001). Adjusted models showed these effects independently of clinical disease activity and other indicators of disease severity. Radiographic damage progression rates increased with the severity of anaemia, suggesting a 'dose-response' effect. The effect of anaemia on damage progression was maintained in subgroups of patients treated with TNF blockade or corticosteroids, and in those not receiving non-selective nonsteroidal anti-inflammatory drugs (NSAIDs). CONCLUSIONS: Anaemia in RA appears to capture disease processes that remain unmeasured by established disease activity measures in patients with or without TNF blockade, and may help to identify patients with more rapid erosive disease.
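For reference, the DAS28-ESR adjustment variable mentioned above is a composite index; the sketch below uses its commonly published formula (coefficients as generally cited in the rheumatology literature, not values reported in this abstract).

```python
import math

def das28_esr(tender28: int, swollen28: int, esr_mm_h: float, patient_global_vas: float) -> float:
    """DAS28-ESR from 28-joint tender/swollen counts, ESR (mm/h) and patient global health (0-100 VAS)."""
    return (0.56 * math.sqrt(tender28)
            + 0.28 * math.sqrt(swollen28)
            + 0.70 * math.log(esr_mm_h)
            + 0.014 * patient_global_vas)

# Illustrative values only: 6 tender and 4 swollen joints, ESR 30 mm/h, global health 50/100.
print(round(das28_esr(6, 4, 30, 50), 2))  # roughly 5.0, i.e. moderate-to-high disease activity
```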