182 results for Chest
Abstract:
The ATLS program of the American College of Surgeons is probably the most important globally active training organization dedicated to improving trauma management. Detection of acute haemorrhagic shock is among the key issues in clinical practice and thus also in medical teaching. In this issue of the journal, William Schulz and Ian McConachrie critically review the ATLS shock classification (Table 1), which has been criticized after several attempts at validation have failed [1]. The main problem is that distinct ranges of heart rate are related to ranges of uncompensated blood loss and that the heart rate decrease observed in severe haemorrhagic shock is ignored [2].
Table 1. Estimated blood loss based on patient's initial presentation (ATLS Student Course Manual, 9th Edition, American College of Surgeons 2012).
Class I: blood loss up to 750 ml (up to 15% of blood volume); pulse rate <100 BPM; systolic blood pressure normal; pulse pressure normal or increased; respiratory rate 14–20; urine output >30 ml/h; slightly anxious; initial fluid replacement: crystalloid.
Class II: blood loss 750–1500 ml (15–30%); pulse rate 100–120 BPM; systolic blood pressure normal; pulse pressure decreased; respiratory rate 20–30; urine output 20–30 ml/h; mildly anxious; initial fluid replacement: crystalloid.
Class III: blood loss 1500–2000 ml (30–40%); pulse rate 120–140 BPM; systolic blood pressure decreased; pulse pressure decreased; respiratory rate 30–40; urine output 5–15 ml/h; anxious, confused; initial fluid replacement: crystalloid and blood.
Class IV: blood loss >2000 ml (>40%); pulse rate >140 BPM; systolic blood pressure decreased; pulse pressure decreased; respiratory rate >35; urine output negligible; confused, lethargic; initial fluid replacement: crystalloid and blood.
In a retrospective evaluation of the Trauma Audit and Research Network (TARN) database, blood loss was estimated according to the injuries in nearly 165,000 adult trauma patients, and each patient was allocated to one of the four ATLS shock classes [3]. Although heart rate increased and systolic blood pressure decreased from class I to class IV, respiratory rate and GCS were similar. The median heart rate in class IV patients was substantially lower than the value of 140 min−1 postulated by ATLS.
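The blood-loss ranges of the ATLS classes form a simple ordered lookup. As a minimal illustration only (the function name and interface are hypothetical, and only the blood-loss criterion from Table 1 is used):

```python
def atls_shock_class(blood_loss_ml: float) -> str:
    """Map an estimated blood loss (ml) to the ATLS class of Table 1.

    Hypothetical helper for illustration; boundary values are assigned
    to the lower class, following the table's "up to" wording.
    """
    if blood_loss_ml <= 750:
        return "I"
    if blood_loss_ml <= 1500:
        return "II"
    if blood_loss_ml <= 2000:
        return "III"
    return "IV"
```

In practice, as the studies discussed here show, blood loss alone does not determine the clinical picture, which is precisely why validation of the full classification has proved difficult.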
Moreover, deterioration of the different parameters does not necessarily occur in parallel, as suggested by the ATLS shock classification [4] and [5]. In all these studies, injury severity score (ISS) and mortality increased with increasing shock class [3] and with increasing heart rate and decreasing blood pressure [4] and [5]. This supports the general concept that the higher the heart rate and the lower the blood pressure, the sicker the patient. A prospective study attempted to validate a shock classification derived from the ATLS shock classes [6]. The authors used a combination of heart rate, blood pressure, clinically estimated blood loss and response to fluid resuscitation to classify trauma patients (Table 2) [6]. In their initial assessment of 715 predominantly blunt trauma patients, 78% were classified as normal (Class 0), 14% as Class I, 6% as Class II and only 1% each as Class III and Class IV. This corresponds to the results of the previous retrospective studies [4] and [5]. The main endpoint used in the prospective study was therefore presence or absence of significant haemorrhage, defined as chest tube drainage >500 ml, evidence of >500 ml of blood loss in the peritoneum, retroperitoneum or pelvic cavity on CT scan, or requirement of any blood transfusion or of >2000 ml of crystalloid. Because of the low prevalence of class II or higher grades, statistical evaluation was limited to a comparison between Class 0 and Classes I–IV combined. As in the retrospective studies, Lawton did not find a statistically significant difference in heart rate or blood pressure among the five groups either, although there was a tendency towards a higher heart rate in Class II patients. Apparently, classification during the primary survey did not rely on vital signs but considered the rather soft criterion of "clinical estimation of blood loss" and the requirement of fluid substitution.
This suggests that allocation of an individual patient to a shock class was probably more an intuitive decision than an objective calculation based on the shock classification. Nevertheless, it was a significant predictor of ISS [6].
Table 2. Shock grade categories in the prospective validation study (Lawton, 2014) [6].
Normal (no haemorrhage): vitals normal; response to fluid bolus (1000 ml) not applicable; estimated blood loss none.
Class I (mild): vitals normal; responds to fluid bolus, no further fluid required; estimated blood loss up to 750 ml.
Class II (moderate): HR >100 with SBP >90 mmHg; responds to fluid bolus, no further fluid required; estimated blood loss 750–1500 ml.
Class III (severe): SBP <90 mmHg; requires repeated fluid boluses; estimated blood loss 1500–2000 ml.
Class IV (moribund): SBP <90 mmHg or imminent arrest; declining SBP despite fluid boluses; estimated blood loss >2000 ml.
What does this mean for clinical practice and medical teaching? All these studies illustrate the difficulty of validating a useful and accepted general physiologic concept of the organism's response to fluid loss: decrease of cardiac output, increase of heart rate and decrease of pulse pressure occur first; hypotension and bradycardia occur only later. Increasing heart rate, increasing diastolic blood pressure or decreasing systolic blood pressure should make any clinician consider hypovolaemia first, because it is treatable and deterioration of the patient is preventable. This is true for the patient on the ward, the sedated patient in the intensive care unit or the anesthetized patient in the OR. We will therefore continue to teach this typical pattern but will also continue to mention the exceptions and pitfalls in a second step. The shock classification of ATLS is primarily used to illustrate the typical pattern of acute haemorrhagic shock (tachycardia and hypotension) as opposed to the Cushing reflex (bradycardia and hypertension) in severe head injury and intracranial hypertension, or to neurogenic shock in acute tetraplegia or high paraplegia (relative bradycardia and hypotension).
Schulz and McConachrie nicely summarize the various confounders and exceptions to the general pattern and explain why, in clinical reality, patients often do not present with the "typical" pictures of our textbooks [1]. ATLS also refers to the pitfalls in the signs of acute haemorrhage (advanced age, athletes, pregnancy, medications and pacemakers) and explicitly states that individual subjects may not follow the general pattern. Obviously, the ATLS shock classification, which is the basis for a number of questions in the written test of the ATLS student course and which has been used for decades, probably needs modification and cannot be applied literally in clinical practice. The European Trauma Course, another important trauma training program, uses the same parameters to estimate blood loss, together with clinical examination and laboratory findings (e.g. base deficit and lactate), but does not use a shock classification tied to absolute values. In conclusion, the typical physiologic response to haemorrhage as illustrated by the ATLS shock classes remains an important issue in clinical practice and in teaching. The estimation of the severity of haemorrhage in the initial assessment of trauma patients is not (and never was) based on vital signs alone but includes the pattern of injuries, the requirement of fluid substitution and potential confounders. Vital signs are not obsolete, especially in the course of treatment, but must be interpreted in view of the clinical context. Conflict of interest: None declared. Member of Swiss national ATLS core faculty.
Abstract:
BACKGROUND Estimates of the prevalence of wheeze depend on questionnaires. However, the wording of questions may vary between studies. We investigated the effects of alternative wording on estimates of the prevalence and severity of wheeze, and on associations with risk factors. METHODS White and South Asian children from a population-based cohort (UK) were randomly assigned to two groups and followed up at one, four and six years (1998, 2001, 2003). Parents were asked either if their child ever had "attacks of wheeze" (attack group, N=535) or "wheezing or whistling in the chest" (whistling group, N=2859). All other study aspects were identical, including questions about other respiratory symptoms. RESULTS Prevalence of wheeze ever was lower in the attack group than in the whistling group for all surveys (32% vs. 40% in white children aged one year, p<0.001). Prevalence of other respiratory symptoms did not differ between groups. Wheeze tended to be more severe in the attack group. The strength of association with risk factors was comparable in the two groups. CONCLUSIONS The wording of questions on wheeze can affect estimates of prevalence but has less impact on measured associations with risk factors. Question wording is a potential source of between-study heterogeneity in meta-analyses.
Abstract:
OBJECTIVE To evaluate whether magnetic resonance imaging (MRI) is as effective as computed tomography (CT) in determining morphologic and functional pulmonary changes in patients with cystic fibrosis (CF) in association with multiple clinical parameters. MATERIALS AND METHODS Institutional review board approval and written informed patient consent were obtained. In this prospective study, 30 patients with CF (17 men and 13 women; mean (SD) age, 30.2 (9.2) years; range, 19-52 years) were included. Chest CT was acquired with an unenhanced low-dose technique for clinical purposes. Lung MRI (1.5 T) comprised T2- and T1-weighted sequences before and after the application of 0.1 mmol/kg gadobutrol, including lung perfusion imaging. All CT and MR images were visually evaluated using 2 different scoring systems: the modified Helbich and the Eichinger scores. Signal intensity of the peribronchial walls and of detected mucus on T2-weighted images, as well as signal enhancement of the peribronchial walls on contrast-enhanced T1-weighted sequences, were additionally assessed on MRI. For the clinical evaluation, the pulmonary exacerbation rate and laboratory and pulmonary functional parameters were determined. RESULTS The overall modified Helbich CT score had a mean (SD) of 15.3 (4.8) (range, 3-21) and a median of 16.0 (interquartile range [IQR], 6.3). The overall modified Helbich MR score showed slightly, but not significantly, lower values (Wilcoxon rank sum test and Student t test; P > 0.05): mean (SD) of 14.3 (4.7) (range, 3-20) and median of 15.0 (IQR, 7.3). Without assessment of perfusion, the overall Eichinger score yielded the following values for CT vs MR examinations: mean (SD), 20.3 (7.2) (range, 4-31) and median, 21.0 (IQR, 9.5) vs mean (SD), 19.5 (7.1) (range, 4-33) and median, 20.0 (IQR, 9.0). None of the differences between CT and MR examinations were significant (Wilcoxon rank sum tests and Student t tests; P > 0.05).
In general, the correlations of the CT scores (overall and individual imaging parameters) with the clinical parameters were slightly higher than those of the MRI scores. However, when all additional MRI parameters were integrated into the scoring systems, the correlations reached the values of the CT scores. The overall image quality was significantly higher for the CT examinations than for the MRI sequences. CONCLUSIONS One major diagnostic benefit of lung MRI in CF is the possible acquisition of several different morphologic and functional imaging features without any radiation exposure. Lung MRI shows reliable associations with CT and clinical parameters, suggesting its implementation in routine CF diagnosis; this would be particularly important for follow-up imaging over the long term.
Abstract:
PURPOSE Lymphangioleiomyomatosis (LAM) is characterized by proliferation of smooth muscle tissue that causes bronchial obstruction and secondary cystic destruction of lung parenchyma. The aim of this study was to evaluate the typical distribution of cystic defects in LAM with quantitative volumetric chest computed tomography (CT). MATERIALS AND METHODS CT examinations of 20 patients with confirmed LAM were evaluated with region-based quantification of lung parenchyma. Additionally, 10 consecutive patients who had recently undergone CT imaging of the lung at our institution and in whom no pathologies of the lung were found were identified to serve as a control group. Each lung was divided into three regions (upper, middle and lower thirds) with an identical number of slices. In addition, we defined a "peel" and a "core" of the lung, comprising the 2 cm subpleural space and the remaining inner lung area, respectively. Computerized detection of lung volume and relative emphysema was performed with the PULMO 3D software (v3.42, Fraunhofer MEVIS, Bremen, Germany). This software package enables the quantification of emphysematous lung parenchyma by calculating the pixel index, which is defined as the ratio of lung voxels with a density < -950 HU to the total number of voxels in the lung. RESULTS Cystic changes accounted for 0.1-39.1% of the total lung volume in patients with LAM. Disease manifestation in the central lung was significantly higher than in peripheral areas (peel median: 15.1%, core median: 20.5%; p=0.001). The lower thirds of lung parenchyma showed significantly fewer cystic changes than the upper and middle lung areas combined (lower third: median 13.4%, upper and middle thirds: median 19.0%, p=0.001). CONCLUSION The distribution of cystic lesions in LAM is significantly more pronounced in the central lung compared with peripheral areas. There is a significant predominance of cystic changes in apical and intermediate lung zones compared with the lung bases.
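The pixel index defined in the abstract (fraction of lung voxels below -950 HU) can be sketched in a few lines. This is an assumed re-implementation for illustration, not the PULMO 3D code; the function name and array interface are hypothetical:

```python
import numpy as np

def pixel_index(hu: np.ndarray, lung_mask: np.ndarray,
                threshold: float = -950.0) -> float:
    """Fraction of lung voxels with a density below `threshold` HU.

    `hu` is a CT volume in Hounsfield units; `lung_mask` is a boolean
    segmentation marking lung voxels. Illustrative sketch only.
    """
    lung_voxels = hu[lung_mask]          # keep only voxels inside the lung
    return float((lung_voxels < threshold).sum() / lung_voxels.size)
```

Applied region by region (upper/middle/lower thirds, peel vs. core), such a ratio yields the regional percentages of cystic change compared in the study.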
Abstract:
BACKGROUND The global burden of childhood tuberculosis (TB) is estimated to be 0.5 million new cases per year. Human immunodeficiency virus (HIV)-infected children are at high risk for TB. Diagnosis of TB in HIV-infected children remains a major challenge. METHODS We describe TB diagnosis and screening practices of pediatric antiretroviral treatment (ART) programs in Africa, Asia, the Caribbean, and Central and South America. We used web-based questionnaires to collect data on ART programs and patients seen from March to July 2012. Forty-three ART programs treating children in 23 countries participated in the study. RESULTS Sputum microscopy and chest radiography were available at all programs, mycobacterial culture in 40 (93%) sites, gastric aspiration in 27 (63%), induced sputum in 23 (54%), and Xpert MTB/RIF in 16 (37%) sites. Screening practices to exclude active TB before starting ART included contact history in 41 sites (84%), symptom screening in 38 (88%), and chest radiography in 34 sites (79%). The use of diagnostic tools was examined among 146 children diagnosed with TB during the study period. Chest radiography was used in 125 (86%) children, sputum microscopy in 76 (52%), induced sputum microscopy in 38 (26%), gastric aspirate microscopy in 35 (24%), culture in 25 (17%), and Xpert MTB/RIF in 11 (8%) children. CONCLUSIONS Induced sputum and Xpert MTB/RIF were infrequently available to diagnose childhood TB, and screening was largely based on symptom identification. There is an urgent need to improve the capacity of ART programs in low- and middle-income countries to exclude and diagnose TB in HIV-infected children.
Abstract:
PURPOSE To determine the effect of the use of iodinated contrast agents on the formation of DNA double-strand breaks during chest computed tomography (CT). MATERIALS AND METHODS This study was approved by the institutional review board, and written informed consent was obtained from all patients. This single-center study was performed at a university hospital. A total of 179 patients underwent contrast material-enhanced CT, and 66 patients underwent unenhanced CT. Blood samples were taken from these patients prior to and immediately after CT. In these blood samples, the average number of phosphorylated histone H2AX (γH2AX) foci per lymphocyte was determined with fluorescence microscopy. Differences between the numbers of foci that developed in the presence and in the absence of the contrast agent were tested for significance by using an independent-sample t test. RESULTS γH2AX foci levels were increased in both groups after CT. Patients who underwent contrast-enhanced CT had a greater amount of DNA radiation damage (mean increase ± standard error of the mean, 0.056 foci per cell ± 0.009). This increase was 107% ± 19% higher than that in patients who underwent unenhanced CT (mean increase, 0.027 foci per cell ± 0.014). CONCLUSION The application of iodinated contrast agents during diagnostic x-ray procedures, such as chest CT, leads to a clear increase in the level of radiation-induced DNA damage as assessed with γH2AX foci formation.
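The reported relative difference follows directly from the two group means quoted in the abstract; a quick arithmetic check:

```python
# Mean increase in gammaH2AX foci per cell after CT, as reported above.
with_contrast = 0.056     # contrast-enhanced CT group
without_contrast = 0.027  # unenhanced CT group

# Relative excess of the contrast-enhanced group over the unenhanced group:
# (0.056 - 0.027) / 0.027 = 0.029 / 0.027, roughly 1.07.
relative_excess = (with_contrast - without_contrast) / without_contrast
print(f"{relative_excess:.0%} higher")  # consistent with the reported ~107%
```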
Abstract:
Pleural infection is a frequent clinical condition. Prompt treatment has been shown to reduce hospital costs, morbidity and mortality. Recent advances in treatment have been variably implemented in clinical practice. This statement reviews the latest developments and concepts to improve clinical management and stimulate further research. The European Association for Cardio-Thoracic Surgery (EACTS) Thoracic Domain and the EACTS Pleural Diseases Working Group established a team of thoracic surgeons to produce a comprehensive review of the available scientific evidence, with the aim of covering all aspects of surgical practice related to the treatment of pleural infection, in particular: surgical treatment of empyema in adults; surgical treatment of empyema in children; and surgical treatment of post-pneumonectomy empyema (PPE). In the management of Stage 1 empyema, prompt chest tube drainage of the pleural space is required. In patients with Stage 2 or 3 empyema who are fit enough to undergo an operative procedure, there is a demonstrated benefit of surgical debridement or decortication [possibly by video-assisted thoracoscopic surgery (VATS)] over tube thoracostomy alone in terms of treatment success and reduction in hospital stay. In children, a primary operative approach is an effective management strategy, associated with a lower mortality rate and a reduction in tube thoracostomy duration, length of antibiotic therapy, reintervention rate and hospital stay. Intrapleural fibrinolytic therapy is a reasonable alternative to primary operative management. Uncomplicated PPE [without bronchopleural fistula (BPF)] can be effectively managed with minimally invasive techniques, including fenestration, pleural space irrigation and VATS debridement. PPE associated with BPF can be effectively managed with individualized open surgical techniques, including direct repair and myoplastic and thoracoplastic techniques.
Intrathoracic vacuum-assisted closure may be considered as an adjunct to the standard treatment. The current literature cements the role of VATS in the management of pleural empyema, even if the choice of surgical approach relies on the individual surgeon's preference.
Abstract:
A 37-year-old man presented with a 4-day history of nonbloody diarrhea, fever, chills, productive cough, vomiting, and a more recent sore throat. He worked for the municipality of a village in the Swiss Alps near St. Moritz. Examination showed fever (40 °C), hypotension, tachycardia, tachypnea, decreased oxygen saturation (90% on room air), and bibasilar crackles and wheezing. Chest radiography and computed tomography showed an infiltrate in the left upper lung lobe. He responded to empiric therapy with imipenem for 5 days. After the imipenem was stopped, the bacteriology laboratory reported that 2 of 2 blood cultures showed growth of Francisella tularensis. He had recurrence of fever and diarrhea. He was treated with ciprofloxacin (500 mg twice daily, orally, for 14 days) and his symptoms resolved. Further testing confirmed that the isolate was F. tularensis subspecies holarctica belonging to the subclade B.FTNF002-00 (Western European cluster). This case should alert physicians that tularemia can occur in high-altitude regions such as the Swiss Alps.
Abstract:
OBJECTIVES The main objective of the present randomized pilot study was to explore the effects of upstream prasugrel, ticagrelor or clopidogrel in patients with ST-segment-elevation myocardial infarction (STEMI) undergoing primary percutaneous coronary intervention (PCI). BACKGROUND Administration of clopidogrel "as soon as possible" has been advocated for STEMI. Pretreatment with prasugrel and ticagrelor may improve reperfusion. Currently, the angiographic effects of upstream administration of these agents are poorly understood. METHODS A total of 132 patients with STEMI within the first 12 hr of chest pain referred for primary angioplasty were randomized to upstream clopidogrel (600 mg), prasugrel (60 mg), or ticagrelor (180 mg) while still in the emergency room. All patients underwent protocol-mandated thrombus aspiration. RESULTS Macroscopic thrombus material was retrieved in 79.5% of the clopidogrel group, 65.9% of the prasugrel group, and 54.3% of the ticagrelor group (P = 0.041). At baseline angiography, large thrombus burden was present in 97.7% vs. 87.8% vs. 80.4% of the clopidogrel, prasugrel, and ticagrelor groups, respectively (P = 0.036). Also at baseline, 97.7% presented with an occluded target vessel in the clopidogrel group, 87.8% in the prasugrel group and 78.3% in the ticagrelor group (P = 0.019). At the end of the procedure, the percentages of patients with combined TIMI grade III flow and myocardial blush grade III were 52.3% for clopidogrel, 80.5% for prasugrel, and 67.4% for ticagrelor (P = 0.022). CONCLUSIONS In patients with STEMI undergoing primary PCI within 12 hr, upstream clopidogrel, prasugrel and ticagrelor yield differing angiographic findings, with a trend toward better results for the latter two agents.
Abstract:
OBJECTIVE There is controversy regarding the significance of radiological consolidation in the context of COPD exacerbation (eCOPD). While some studies of eCOPD exclude these cases, consolidation is a common feature of eCOPD admissions in real practice. This study aims to address the question of whether consolidation in eCOPD is a distinct clinical phenotype with implications for management decisions and outcomes. PATIENTS AND METHODS The European COPD Audit was carried out in 384 hospitals from 13 European countries between 2010 and 2011 to analyze guideline adherence in eCOPD. In this analysis, admissions were split according to the presence or absence of consolidation on the admission chest radiograph. Groups were compared in terms of clinical and epidemiological features, existing treatment, clinical care utilized and mortality. RESULTS 14,111 cases were included, comprising 2,714 (19.2%) with consolidation and 11,397 (80.8%) without. The risk of radiographic consolidation increased with age, female gender, cardiovascular diseases, having had two or more admissions in the previous year, and sputum color change. Previous treatment with inhaled steroids was not associated with consolidation. Patients with radiographic consolidation were significantly more likely to receive antibiotics, oxygen and non-invasive ventilation during the admission and had lower survival from admission to 90-day follow-up. CONCLUSIONS Patients admitted for COPD exacerbation who have radiological consolidation have a more severe illness course, are treated more intensively by clinicians and have a poorer prognosis. We recommend that these patients be considered a distinct subset in COPD exacerbation.
Abstract:
BACKGROUND Tuberculosis (TB) is a poverty-related disease that is associated with poor living conditions. We studied TB mortality and living conditions in Bern between 1856 and 1950. METHODS We analysed cause-specific mortality based on autopsy-certified mortality registers and public health reports (1856 to 1950) from the city council of Bern. RESULTS TB mortality was higher in the Black Quarter (550 per 100,000) and in the city centre (327 per 100,000) than in the outskirts (209 per 100,000 in 1911-1915). TB mortality correlated positively with the number of persons per room (r = 0.69, p = 0.026) and the percentage of rooms without sunlight (r = 0.72, p = 0.020), and negatively with the number of windows per apartment (r = -0.79, p = 0.007). TB mortality decreased 10-fold, from 330 per 100,000 in 1856 to 33 per 100,000 in 1950, as housing conditions improved, indoor crowding decreased, and open-air schools, sanatoria, systematic tuberculin skin testing of school children and chest radiography screening were introduced. CONCLUSIONS Improved living conditions and public health measures may have contributed to the massive decline of the TB epidemic in the city of Bern even before effective antibiotic treatment finally became available in the 1950s.
Abstract:
Whether anticoagulation management practices are associated with improved outcomes in elderly patients with acute venous thromboembolism (VTE) is uncertain. Thus, we aimed to examine whether practices recommended by the American College of Chest Physicians guidelines are associated with outcomes in elderly patients with VTE. We studied 991 patients aged ≥65 years with acute VTE in a Swiss prospective multicenter cohort study and assessed the adherence to four management practices: parenteral anticoagulation ≥5 days, INR ≥2.0 for ≥24 hours before stopping parenteral anticoagulation, early start with vitamin K antagonists (VKA) ≤24 hours of VTE diagnosis, and the use of low-molecular-weight heparin (LMWH) or fondaparinux. The outcomes were all-cause mortality, VTE recurrence, and major bleeding at 6 months, and the length of hospital stay (LOS). We used Cox regression and lognormal survival models, adjusting for patient characteristics. Overall, 9% of patients died, 3% had VTE recurrence, and 7% major bleeding. Early start with VKA was associated with a lower risk of major bleeding (adjusted hazard ratio 0.37, 95% CI 0.20-0.71). Early start with VKA (adjusted time ratio [TR] 0.77, 95% CI 0.69-0.86) and use of LMWH/fondaparinux (adjusted TR 0.87, 95% CI 0.78-0.97) were associated with a shorter LOS. An INR ≥2.0 for ≥24 hours before stopping parenteral anticoagulants was associated with a longer LOS (adjusted TR 1.2, 95% CI 1.08-1.33). In elderly patients with VTE, the adherence to recommended anticoagulation management practices showed mixed results. In conclusion, only early start with VKA and use of parenteral LMWH/fondaparinux were associated with better outcomes.
Abstract:
BACKGROUND The choice of imaging techniques in patients with suspected coronary artery disease (CAD) varies between countries, regions, and hospitals. This prospective, multicenter, comparative effectiveness study was designed to assess the relative accuracy of commonly used imaging techniques for identifying patients with significant CAD. METHODS AND RESULTS A total of 475 patients with stable chest pain and an intermediate likelihood of CAD underwent coronary computed tomographic angiography; stress myocardial perfusion imaging by single photon emission computed tomography or positron emission tomography; and ventricular wall motion imaging by stress echocardiography or cardiac magnetic resonance. If ≥1 test was abnormal, patients underwent invasive coronary angiography. Significant CAD was defined by invasive coronary angiography as >50% stenosis of the left main stem, >70% stenosis in a major coronary vessel, or 30% to 70% stenosis with fractional flow reserve ≤0.8. Significant CAD was present in 29% of patients. In a patient-based analysis, coronary computed tomographic angiography had the highest diagnostic accuracy, with an area under the receiver operating characteristic curve of 0.91 (95% confidence interval, 0.88-0.94), a sensitivity of 91%, and a specificity of 92%. Myocardial perfusion imaging had good diagnostic accuracy (area under the curve, 0.74; confidence interval, 0.69-0.78), with a sensitivity of 74% and a specificity of 73%. Wall motion imaging had similar accuracy (area under the curve, 0.70; confidence interval, 0.65-0.75) but lower sensitivity (49%, P<0.001) and higher specificity (92%, P<0.001). The diagnostic accuracies of myocardial perfusion imaging and wall motion imaging were lower than that of coronary computed tomographic angiography (P<0.001).
CONCLUSIONS In a multicenter European population of patients with stable chest pain and low prevalence of CAD, coronary computed tomographic angiography is more accurate than noninvasive functional testing for detecting significant CAD defined invasively. CLINICAL TRIAL REGISTRATION URL http://www.clinicaltrials.gov. Unique identifier: NCT00979199.
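The sensitivity and specificity figures quoted above follow the standard 2x2 confusion-table definitions; a minimal sketch (the counts below are hypothetical, chosen only to mirror the reported percentages):

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of patients with significant CAD that the test detects."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of patients without significant CAD that the test clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical counts per 100 diseased and 100 non-diseased patients,
# matching the coronary CT angiography figures in the abstract.
print(sensitivity(91, 9), specificity(92, 8))
```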
Abstract:
BACKGROUND Panic disorder is characterised by the presence of recurrent unexpected panic attacks, discrete periods of fear or anxiety that have a rapid onset and include symptoms such as racing heart, chest pain, sweating and shaking. Panic disorder is common in the general population, with a lifetime prevalence of 1% to 4%. A previous Cochrane meta-analysis suggested that psychological therapy (either alone or combined with pharmacotherapy) can be chosen as a first-line treatment for panic disorder with or without agoraphobia. However, it is not yet clear whether certain psychological therapies can be considered superior to others. In order to answer this question, in this review we performed a network meta-analysis (NMA), in which we compared eight different forms of psychological therapy and three forms of a control condition. OBJECTIVES To assess the comparative efficacy and acceptability of different psychological therapies and different control conditions for panic disorder, with or without agoraphobia, in adults. SEARCH METHODS We conducted the main searches in the CCDANCTR electronic databases (studies and references registers), all years to 16 March 2015. We conducted complementary searches in PubMed and trials registries. Supplementary searches included reference lists of included studies, citation indexes, personal communication to the authors of all included studies and grey literature searches in OpenSIGLE. We applied no restrictions on date, language or publication status. SELECTION CRITERIA We included all relevant randomised controlled trials (RCTs) focusing on adults with a formal diagnosis of panic disorder with or without agoraphobia. We considered the following psychological therapies: psychoeducation (PE), supportive psychotherapy (SP), physiological therapies (PT), behaviour therapy (BT), cognitive therapy (CT), cognitive behaviour therapy (CBT), third-wave CBT (3W) and psychodynamic therapies (PD). 
We included both individual and group formats. Therapies had to be administered face-to-face. The comparator interventions considered for this review were: no treatment (NT), wait list (WL) and attention/psychological placebo (APP). For this review we considered four short-term (ST) outcomes (ST-remission, ST-response, ST-dropouts, ST-improvement on a continuous scale) and one long-term (LT) outcome (LT-remission/response). DATA COLLECTION AND ANALYSIS As a first step, we conducted a systematic search of all relevant papers according to the inclusion criteria. For each outcome, we then constructed a treatment network in order to clarify the extent to which each type of therapy and each comparison had been investigated in the available literature. Then, for each available comparison, we conducted a random-effects meta-analysis. Subsequently, we performed a network meta-analysis in order to synthesise the available direct evidence with indirect evidence, and to obtain an overall effect size estimate for each possible pair of therapies in the network. Finally, we calculated a probabilistic ranking of the different psychological therapies and control conditions for each outcome. MAIN RESULTS We identified 1432 references; after screening, we included 60 studies in the final qualitative analyses. Among these, 54 (including 3021 patients) were also included in the quantitative analyses. With respect to the analyses for the first of our primary outcomes (short-term remission), the most studied of the included psychological therapies was CBT (32 studies), followed by BT (12 studies), PT (10 studies), CT (three studies), SP (three studies) and PD (two studies). The quality of the evidence for the entire network was found to be low for all outcomes. The quality of the evidence for CBT vs NT, CBT vs SP and CBT vs PD was low to very low, depending on the outcome. The majority of the included studies were at unclear risk of bias with regard to the randomisation process.
We found almost half of the included studies to be at high risk of attrition bias and detection bias. We also found selective outcome reporting bias to be present, and we strongly suspected publication bias. Finally, we found almost half of the included studies to be at high risk of researcher allegiance bias. Overall, the networks appeared to be well connected, but were generally underpowered to detect any important disagreement between direct and indirect evidence. The results showed the superiority of psychological therapies over the WL condition, although this finding was amplified by evident small-study effects (SSE). The NMAs for ST-remission, ST-response and ST-improvement on a continuous scale showed well-replicated evidence in favour of CBT, as well as some sparse but relevant evidence in favour of PD and SP, over other therapies. In terms of ST-dropouts, PD and 3W showed better tolerability than other psychological therapies in the short term. In the long term, CBT and PD showed the highest levels of remission/response, suggesting that the effects of these two treatments may be more stable than those of the other psychological therapies. However, all the mentioned differences among active treatments must be interpreted with the caveat that in most cases the effect sizes were small and/or the results were imprecise.
AUTHORS' CONCLUSIONS
There is no high-quality, unequivocal evidence to support one psychological therapy over the others for the treatment of panic disorder with or without agoraphobia in adults. However, the results show that CBT, the most extensively studied of the included psychological therapies, was often superior to other therapies, although the effect size was small and the level of precision was often insufficient or clinically irrelevant.
In the only two available studies that explored PD, this treatment showed promising results, although further research is needed to better establish its efficacy relative to CBT. Furthermore, PD appeared to be the best tolerated (in terms of ST-dropouts) of the psychological treatments. Unexpectedly, we found some evidence in support of the possible viability of non-specific supportive psychotherapy for the treatment of panic disorder; however, the results concerning SP should be interpreted cautiously because of the sparsity of evidence regarding this treatment and, as in the case of PD, further research is needed to explore this issue. Behaviour therapy did not appear to be a valid alternative to CBT as a first-line treatment for patients with panic disorder with or without agoraphobia.
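The core statistical step of a network meta-analysis — obtaining an indirect estimate of two therapies through a shared comparator and combining it with direct evidence — can be illustrated with a minimal sketch. This is the adjusted indirect comparison (Bucher) method with simple fixed-effect pooling, not the random-effects model the review actually used, and all effect sizes below are invented for illustration:

```python
import math

def indirect_estimate(d_ab, se_ab, d_cb, se_cb):
    """Adjusted indirect comparison via a common comparator B.

    Given direct estimates of A vs B and C vs B (e.g. log odds ratios
    of short-term remission), the indirect A vs C estimate is their
    difference, and the variances add.
    """
    d_ac = d_ab - d_cb
    se_ac = math.sqrt(se_ab**2 + se_cb**2)
    return d_ac, se_ac

def inverse_variance_pool(estimates):
    """Pool direct and indirect estimates of the same comparison,
    weighting each by its inverse variance (fixed-effect, for
    simplicity; the review used random-effects models)."""
    weights = [1.0 / se**2 for _, se in estimates]
    pooled = sum(w * d for (d, _), w in zip(estimates, weights)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, se_pooled

# Hypothetical log odds ratios: CBT vs WL (1.2, SE 0.3) and PD vs WL
# (0.8, SE 0.4) yield an indirect CBT vs PD estimate of 0.4, SE 0.5.
d_ind, se_ind = indirect_estimate(1.2, 0.3, 0.8, 0.4)
# Combine with a hypothetical direct CBT vs PD trial (0.5, SE 0.35);
# the pooled estimate is more precise than either source alone.
d_mix, se_mix = inverse_variance_pool([(0.5, 0.35), (d_ind, se_ind)])
```

Repeating this pooling over every edge of the treatment network, under a consistency assumption, is what yields an effect estimate for each possible pair of therapies even when they were never compared head-to-head.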
Abstract:
Symptoms of primary ciliary dyskinesia (PCD) are nonspecific, and guidance on whom to refer for testing is limited. Diagnostic tests for PCD are highly specialised, requiring expensive equipment and experienced PCD scientists. This study aims to develop a practical clinical diagnostic tool to identify patients requiring testing.
Patients consecutively referred for testing were studied. Information readily obtained from the patient history was correlated with diagnostic outcome. Using logistic regression, the predictive performance of the best model was tested by receiver operating characteristic curve analyses. The model was simplified into a practical tool (PICADAR) and externally validated in a second diagnostic centre.
Of 641 referrals with a definitive diagnostic outcome, 75 (12%) were positive. PICADAR applies to patients with persistent wet cough and has seven predictive parameters: full-term gestation, neonatal chest symptoms, neonatal intensive care admittance, chronic rhinitis, ear symptoms, situs inversus and congenital cardiac defect. Sensitivity and specificity of the tool were 0.90 and 0.75 for a cut-off score of 5 points. The area under the curve for the internally and externally validated tool was 0.91 and 0.87, respectively.
PICADAR represents a simple clinical diagnostic prediction rule with good accuracy and validity, ready for testing in respiratory centres referring to PCD centres.
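A clinical prediction rule of this shape reduces to a weighted checklist. The sketch below shows the idea: the seven predictors and the cut-off of 5 points are taken from the abstract, but the per-item point values are illustrative placeholders, not the published PICADAR weights:

```python
# Sketch of a PICADAR-style scoring rule. The seven predictors and the
# cut-off of 5 come from the abstract; the point values below are
# illustrative placeholders, NOT the weights of the published tool.
ILLUSTRATIVE_WEIGHTS = {
    "term_gestation": 2,
    "neonatal_chest_symptoms": 2,
    "neonatal_intensive_care": 2,
    "chronic_rhinitis": 1,
    "ear_symptoms": 1,
    "situs_inversus": 4,
    "congenital_cardiac_defect": 2,
}

CUT_OFF = 5  # score >= 5 suggests referral for PCD testing (per abstract)

def picadar_like_score(answers: dict) -> int:
    """Sum the points for each predictor answered 'yes'.

    `answers` maps predictor name -> bool. The rule applies only to
    patients with a persistent wet cough.
    """
    return sum(w for key, w in ILLUSTRATIVE_WEIGHTS.items() if answers.get(key))

def refer_for_testing(answers: dict) -> bool:
    return picadar_like_score(answers) >= CUT_OFF

# Hypothetical patient: neonatal chest symptoms, chronic rhinitis and
# situs inversus score 2 + 1 + 4 = 7 under these placeholder weights.
patient = {"neonatal_chest_symptoms": True, "chronic_rhinitis": True,
           "situs_inversus": True}
```

The design choice that makes such rules practical is that every input is a yes/no item obtainable from the patient history alone, so the score can be computed at the bedside before any specialised PCD testing.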