957 results for "Clinical severity score"


Relevance: 90.00%

Abstract:

Background Adult community-acquired pneumonia (CAP) is a relevant worldwide cause of morbidity and mortality; however, the aetiology often remains uncertain and therapy is empirical. We applied conventional and molecular diagnostics to identify viruses and atypical bacteria associated with CAP in Chile. Methods We used sputum and blood cultures, IgG/IgM serology and molecular diagnostic techniques (PCR, reverse transcriptase PCR) to detect classical and atypical bacteria (Mycoplasma pneumoniae, Chlamydia pneumoniae, Legionella pneumophila) and respiratory viruses (adenovirus, respiratory syncytial virus (RSV), human metapneumovirus, influenza virus, parainfluenza virus, rhinovirus, coronavirus) in adults >18 years old presenting with CAP in Santiago from February 2005 to September 2007. Severity was assessed at admission with Fine's pneumonia severity index. Results Among the 356 enrolled adults, we detected a single bacterial pathogen in 92 cases (26%), a single viral pathogen in 80 (22%) and mixed bacterial and viral infection in 60 (17%); no pathogen was identified in 124 cases (35%). Streptococcus pneumoniae and RSV were the most common bacterial and viral pathogens identified. Pathogen detection by PCR provided greater sensitivity than conventional techniques. To our surprise, no relationship was observed between clinical severity and single or mixed infections. Conclusions The use of molecular diagnostics expanded the detection of viruses and atypical bacteria, alone or as coinfections, in adults with CAP. Clinical severity and outcome were independent of the aetiological agents detected.

Relevance: 90.00%

Abstract:

OBJECTIVE: Ectopic calcification and mediacalcinosis can be promoted by corticosteroid use. The aim of the present investigation was to describe features of macrovascular disease in patients with long-term corticosteroid therapy and symptomatic lower limb peripheral arterial occlusive disease (PAD). METHODS: A consecutive series of 2783 patients undergoing clinical and angiographic work-up of PAD were screened for long-term (>5 years) corticosteroid use (group A). They were compared with a randomly selected age-, sex- and risk-factor-matched PAD control cohort from the same series without corticosteroid use (group B). Patients with diabetes mellitus or severe renal failure were excluded. Arterial calcification was evaluated by qualitative assessment of radiographic images. Severity of atherosclerotic lesions was analysed from angiographic images using a semi-quantitative score (Bollinger score). RESULTS: In total, 12 patients (5 males, mean age 78.5 ± 9.0 years) with 15 ischaemic limbs qualified for enrolment in group A and were compared with 23 matched control patients (6 2 males, mean age 79.5 ± 6 years) with 32 ischaemic limbs. Incompressibility of ankle arteries, determined by measurement of the ankle-brachial index, was seen in 12 limbs (80%) in group A compared with 3 limbs (9%) in group B (p = 0.0009). No significant difference was found between groups A and B for segmental calcification, whereas comparison of the atherosclerotic burden using the angiographic severity score showed a significantly higher score at the infragenicular arterial level in group A (p = 0.001). CONCLUSION: These findings suggest that long-term corticosteroid therapy is associated with a distally accentuated, calcifying peripheral atherosclerosis inducing arterial incompressibility. This occlusion pattern is comparable to that of patients with renal failure or diabetes. Further research is required to support our observations.
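As a quick reference for the ankle-brachial index (ABI) measurement mentioned above, the following minimal sketch shows how arterial incompressibility is commonly flagged; the 1.4 cut-off, the function names and the example pressures are illustrative assumptions, not values taken from this study.

```python
def ankle_brachial_index(ankle_systolic_mmHg: float, brachial_systolic_mmHg: float) -> float:
    """Ratio of ankle to brachial systolic pressure (standard ABI definition)."""
    return ankle_systolic_mmHg / brachial_systolic_mmHg

def is_incompressible(abi: float, threshold: float = 1.4) -> bool:
    """Medial calcification prevents cuff occlusion of the artery, producing
    spuriously high ABI values; >1.4 is a commonly used cut-off (assumed here)."""
    return abi > threshold

# Example: ankle pressure 210 mmHg, brachial pressure 140 mmHg -> ABI 1.5, flagged
print(is_incompressible(ankle_brachial_index(210, 140)))  # True
```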

Relevance: 90.00%

Abstract:

There is growing evidence that aberrant innate immune responses towards the bacterial flora of the gut play a role in the pathogenesis of canine inflammatory bowel disease (IBD). Toll-like receptors (TLRs) play an important role as primary sensors of invading pathogens and have gained significant attention in human IBD, as differential expression and polymorphisms of certain TLRs have been shown to occur in ulcerative colitis (UC) and Crohn's disease (CD). The aim of the current study was to evaluate the expression of two TLRs important for the recognition of commensals in the gut. TLR2 and TLR4 mRNA expression in duodenal biopsies from dogs with IBD was measured and correlated with clinical and histological disease severity. Endoscopic duodenal biopsies from 20 clinical cases and 7 healthy control dogs were used to extract mRNA. TLR2 and TLR4 mRNA expression was assessed using quantitative real-time PCR. TLR2 mRNA expression was significantly increased in the IBD dogs compared with controls, whereas TLR4 mRNA expression was similar in IBD and control cases. In addition, TLR2 mRNA expression was weakly correlated with clinical severity of disease; however, there was no correlation between TLR2 expression and histological severity of disease.

Relevance: 90.00%

Abstract:

PRINCIPLES Accidents in agriculture are a problem of global importance. The hazards of working in agriculture are manifold (machines, animals, heights). We therefore assessed injury severity and mortality from accidents in farming. METHODS We retrospectively analysed all farming accidents treated over a 12-year period in the emergency department (ED) of our level I trauma centre. RESULTS Of 815 patients, 96.3% were male and 3.7% female (p <0.0001). A total of 70 patients (8.6%, 70/815) were severely injured. Patients with injuries to the chest were most likely to suffer severe injuries (odds ratio [OR] 9.45, 95% confidence interval [CI] 5.59-16.00, p <0.0001), followed by patients with injuries to the abdomen (OR 7.06, 95% CI 3.22-15.43, p <0.0001) and patients with injuries to the head (OR 5.03, 95% CI 2.99-8.66, p <0.0001). Hospitalisation was associated with machine- and fall-related injuries (OR 22.39, 95% CI 1.95-4.14, p <0.0001 and OR 2.84, 95% CI 1.68-3.41, p <0.001, respectively). Patients suffering a fall and patients with severe injury were more likely to die than others (OR 3.32, 95% CI 1.07-10.29, p <0.037 and OR 9.17, 95% CI 6.20-13.56, p <0.0001, respectively). Fall height correlated positively with the injury severity score, hospitalisation and mortality (all p <0.0001). CONCLUSION Injuries in agriculture are accompanied by substantial morbidity and mortality, ranging from minor injuries to severe multiple injuries. Additional prospective studies should be conducted on injury severity, long-term disability and mortality.
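The associations above are reported as odds ratios with 95% confidence intervals; as a reminder of how such estimates are obtained from a 2x2 table, here is a minimal sketch using a Wald interval. The counts in the example are hypothetical and are not data from this study.

```python
import math

def odds_ratio_with_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a/b: exposed with/without outcome; c/d: unexposed with/without outcome.
    Returns the odds ratio and its Wald 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lower, upper)

# Hypothetical counts: 30 of 90 chest-injured vs 40 of 725 other patients severely injured
print(odds_ratio_with_ci(30, 60, 40, 685))
```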

Relevance: 90.00%

Abstract:

Background Our knowledge of factors influencing mortality in patients with pelvic ring injuries and the impact of associated injuries is currently based on limited information. Questions/purposes We identified the (1) causes and time of death, (2) demography, and (3) pattern and severity of injuries in patients with pelvic ring fractures who did not survive. Methods We prospectively collected data on 5340 patients listed in the German Pelvic Trauma Registry between April 30, 2004 and July 29, 2011; 3034 of the 5340 (57%) patients were female. Demographic data and parameters indicating the type and severity of injury were recorded for patients who died in hospital (nonsurvivors) and compared with data of patients who survived (survivors). The median follow-up was 13 days (range, 0–1117 days). Results A total of 238 (4%) patients died, a median of 2 days after trauma. The main cause of death was massive bleeding (34%), predominantly from the pelvic region (62% of all patients who died of massive bleeding). Fifty-six percent of nonsurvivors and 43% of survivors were male. Nonsurvivors were characterized by a higher incidence of complex pelvic injuries (32% versus 8%), fewer isolated pelvic ring fractures (13% versus 49%), lower initial blood hemoglobin concentration (6.7 ± 2.9 versus 9.8 ± 3.0 g/dL) and systolic arterial blood pressure (77 ± 27 versus 106 ± 24 mmHg), and a higher injury severity score (ISS) (35 ± 16 versus 15 ± 12). Conclusion Patients with pelvic fractures who did not survive were characterized by male gender, severe multiple trauma, and major hemorrhage.

Relevance: 90.00%

Abstract:

BACKGROUND Although well established for suspected lower limb deep venous thrombosis, an algorithm combining a clinical decision score, d-dimer testing, and ultrasonography has not been evaluated for suspected upper extremity deep venous thrombosis (UEDVT). OBJECTIVE To assess the safety and feasibility of a new diagnostic algorithm in patients with clinically suspected UEDVT. DESIGN Diagnostic management study (ClinicalTrials.gov: NCT01324037). SETTING 16 hospitals in Europe and the United States. PATIENTS 406 inpatients and outpatients with suspected UEDVT. MEASUREMENTS The algorithm consisted of the sequential application of a clinical decision score, d-dimer testing, and ultrasonography. Patients were first categorized as likely or unlikely to have UEDVT; in those with an unlikely score and normal d-dimer levels, UEDVT was excluded. All other patients underwent (repeated) compression ultrasonography. The primary outcome was the 3-month incidence of symptomatic UEDVT and pulmonary embolism in patients with a normal diagnostic work-up. RESULTS The algorithm was feasible and completed in 390 of the 406 patients (96%). In 87 patients (21%), an unlikely score combined with normal d-dimer levels excluded UEDVT. Superficial venous thrombosis and UEDVT were diagnosed in 54 (13%) and 103 (25%) patients, respectively. All 249 patients with a normal diagnostic work-up, including those with protocol violations (n = 16), were followed for 3 months. One patient developed UEDVT during follow-up, for an overall failure rate of 0.4% (95% CI, 0.0% to 2.2%). LIMITATIONS This study was not powered to show the safety of the substrategies. d-Dimer testing was done locally. CONCLUSION The combination of a clinical decision score, d-dimer testing, and ultrasonography can safely and effectively exclude UEDVT. If confirmed by other studies, this algorithm has potential as a standard approach to suspected UEDVT. PRIMARY FUNDING SOURCE None.
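The sequential work-up described under MEASUREMENTS can be written out as a simple decision procedure. The sketch below assumes the commonly cited Constans clinical decision score items and an "unlikely" cut-off of <2 points; these details, and all names and fields, are assumptions for illustration rather than the trial protocol itself.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    venous_material: bool      # catheter or pacemaker lead in a subclavian/jugular vein
    localized_pain: bool
    unilateral_edema: bool
    other_dx_plausible: bool   # alternative diagnosis at least as likely
    d_dimer_normal: bool
    ultrasound_positive: bool  # result of (repeated) compression ultrasonography

def clinical_decision_score(p: Patient) -> int:
    # Assumed Constans score: +1 for each of the first three items, -1 if an
    # alternative diagnosis is at least as plausible; >=2 means "UEDVT likely".
    return p.venous_material + p.localized_pain + p.unilateral_edema - p.other_dx_plausible

def work_up(p: Patient) -> str:
    if clinical_decision_score(p) < 2 and p.d_dimer_normal:
        return "UEDVT excluded without imaging"
    return "UEDVT confirmed" if p.ultrasound_positive else "UEDVT excluded by ultrasonography"

print(work_up(Patient(True, True, False, False, False, True)))  # "UEDVT confirmed"
```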

Relevance: 90.00%

Abstract:

BACKGROUND Recently, two simple clinical scores were published to predict survival in trauma patients. Both scores may successfully guide major trauma triage, but neither has been independently validated in a hospital setting. METHODS This is a cohort study with 30-day mortality as the primary outcome to validate two new trauma scores, the Mechanism, Glasgow Coma Scale (GCS), Age, and Pressure (MGAP) score and the GCS, Age, and Pressure (GAP) score, using data from the UK Trauma Audit and Research Network. First, we assessed discrimination, using the area under the receiver operating characteristic (ROC) curve, and calibration, comparing mortality rates with those originally published. Second, we calculated sensitivity, specificity, predictive values, and likelihood ratios for prognostic score performance. Third, we propose new cutoffs for the risk categories. RESULTS A total of 79,807 adult (≥16 years) major trauma patients (2000-2010) were included; 5,474 (6.9%) died. Mean (SD) age was 51.5 (22.4) years, median GCS score was 15 (interquartile range, 15-15), and median Injury Severity Score (ISS) was 9 (interquartile range, 9-16). More than 50% of the patients had a low-risk GAP or MGAP score (1% mortality). With regard to discrimination, the areas under the ROC curve were 87.2% for the GAP score (95% confidence interval, 86.7-87.7) and 86.8% for the MGAP score (95% confidence interval, 86.2-87.3). With regard to calibration, 2,390 (3.3%), 1,900 (28.5%), and 1,184 (72.2%) patients died in the low, medium, and high GAP risk categories, respectively. In the low- and medium-risk groups, these rates were almost double those previously published. For MGAP, 1,861 (2.8%), 1,455 (15.2%), and 2,158 (58.6%) patients died in the low-, medium-, and high-risk categories, consonant with the results originally published. Reclassifying score point cutoffs improved likelihood ratios, sensitivity and specificity, as well as the areas under the ROC curve. CONCLUSION We found both scores to be valid triage tools for stratifying emergency department patients according to their risk of death. MGAP calibrated better, but GAP slightly improved discrimination. The newly proposed cutoffs better differentiate risk classification and may therefore facilitate hospital resource allocation. LEVEL OF EVIDENCE Prognostic study, level II.
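For readers unfamiliar with the GAP score referred to above, the sketch below shows one plausible implementation; the point assignments and the original low/medium/high cut-offs are quoted from memory of the published score and should be treated as assumptions rather than as the validated instrument.

```python
def gap_score(gcs: int, age_years: int, sbp_mmHg: int) -> int:
    """GAP score as commonly described (point values assumed): GCS contributes
    3-15 points, age <60 years adds 3 points, and systolic blood pressure adds
    6 points if >120 mmHg or 4 points if 60-120 mmHg."""
    score = gcs
    score += 3 if age_years < 60 else 0
    if sbp_mmHg > 120:
        score += 6
    elif sbp_mmHg >= 60:
        score += 4
    return score

def gap_risk_category(score: int) -> str:
    # Originally published cut-offs (assumed): 19-24 low, 11-18 medium, 3-10 high.
    if score >= 19:
        return "low"
    return "medium" if score >= 11 else "high"

# A 45-year-old with GCS 14 and SBP 130 mmHg -> GAP 23, low risk
print(gap_risk_category(gap_score(14, 45, 130)))
```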

Relevance: 90.00%

Abstract:

OBJECTIVES Because neural invasion (NI) is still inconsistently reported and not well characterized within gastrointestinal malignancies (GIMs), our aim was to determine the exact prevalence and severity of NI and to elucidate its true impact on patients' prognosis. BACKGROUND The Union Internationale Contre le Cancer (UICC) recently added NI as a novel parameter to the current TNM classification. However, there are only a few existing studies with a specific focus on NI, so the distinct role of NI in GIMs is still uncertain. MATERIALS AND METHODS NI was characterized in approximately 16,000 hematoxylin and eosin tissue sections from 2050 patients with adenocarcinoma of the esophagogastric junction (AEG) types I-III, squamous cell carcinoma (SCC) of the esophagus, gastric cancer (GC), colon cancer (CC), rectal cancer (RC), cholangiocellular cancer (CCC), hepatocellular cancer (HCC), and pancreatic cancer (PC). NI prevalence and severity were determined and related to patients' prognosis and survival. RESULTS NI prevalence varied widely: HCC 6%, CC 28%, RC 34%, AEG-I 36%, AEG-II 36%, SCC 37%, GC 38%, CCC 58%, AEG-III 65%, and PC 100%. The NI severity score was highest in PC (24.9 ± 1.9) and lowest in AEG-I (0.8 ± 0.3). Multivariable analyses including age, sex, TNM stage, and grading revealed that the prevalence of NI was significantly associated with diminished survival in AEG-II/III, GC, and RC. However, increasing NI severity impaired survival in AEG-II/III and PC only. CONCLUSIONS NI prevalence and severity vary strongly within GIMs. Determining NI severity in GIMs is a more precise tool than solely recording the presence of NI, and it revealed a dismal prognostic impact in patients with AEG-II/III and PC. Evidently, NI is not a mere concomitant feature in GIMs and therefore deserves special attention for improved patient stratification and individualized therapy after surgery.

Relevance: 90.00%

Abstract:

PRINCIPLES Over a million people worldwide die each year from road traffic injuries and more than 10 million sustain permanent disabilities. Many of these victims are pedestrians. The present retrospective study analyses the severity and mortality of injuries suffered by adult pedestrians, depending on whether they used a zebra crosswalk. METHODS Our retrospective data analysis covered adult patients admitted to our emergency department (ED) between 1 January 2000 and 31 December 2012 after being hit by a vehicle while crossing the road as a pedestrian. Patients were identified using a search string. Medical, police and ambulance records were reviewed for data extraction. RESULTS A total of 347 patients were eligible for study inclusion. Two hundred and three (203; 58.5%) patients were on a zebra crosswalk and 144 (41.5%) were not. The mean ISS (Injury Severity Score) was 12.1 (SD 14.7, range 1-75). The vehicles were faster in non-zebra crosswalk accidents (47.7 km/h versus 41.4 km/h, p<0.027). The mean ISS was higher in patients with non-zebra crosswalk accidents: 14.4 (SD 16.5, range 1-75) versus 10.5 (SD 13.14, range 1-75) (p<0.019). Zebra crosswalk accidents were associated with a lower risk of severe injury (OR 0.61, 95% CI 0.38-0.98, p<0.042). Accidents involving a truck were associated with an increased risk of severe injury (OR 3.53, 95% CI 1.21-10.26, p<0.02). CONCLUSION Accidents on zebra crosswalks are more common than those not on zebra crosswalks. Injury severity in non-zebra crosswalk accidents is significantly higher than in zebra crosswalk accidents. Accidents involving large vehicles are associated with an increased risk of severe injury. Further prospective studies are needed, with detailed assessment of motor vehicle types and speed.
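The ISS used above is conventionally derived from Abbreviated Injury Scale (AIS) grades. A minimal sketch of that standard calculation (not code from this study; region names are illustrative) is:

```python
def injury_severity_score(region_ais: dict[str, int]) -> int:
    """Standard ISS: sum of squares of the highest AIS grade in the three most
    severely injured of the six ISS body regions; any AIS of 6 sets ISS to 75."""
    grades = sorted(region_ais.values(), reverse=True)
    if grades and grades[0] == 6:
        return 75
    return sum(g * g for g in grades[:3])

# Example: head AIS 3, chest AIS 4, lower extremity AIS 2 -> 16 + 9 + 4 = 29
print(injury_severity_score({"head": 3, "chest": 4, "lower_extremity": 2}))
```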

Relevance: 90.00%

Abstract:

BACKGROUND Low vitamin D has been implicated in various chronic pain conditions, although with inconclusive findings. Vitamin D might play an important role in mechanisms involved in the central processing of evoked pain stimuli, but less so in spontaneous clinical pain. OBJECTIVE This study aimed to examine the relation between low serum levels of 25-hydroxyvitamin D3 (25-OH D) and mechanical pain sensitivity. DESIGN We studied 174 patients (mean age 48 years, 53% women) with chronic pain. A standardized pain provocation test was applied, and pain intensity was rated on a numerical analogue scale (0-10). The widespread pain index and symptom severity score (including fatigue, waking unrefreshed, and cognitive symptoms) according to the 2010 American College of Rheumatology preliminary diagnostic criteria for fibromyalgia were also assessed. Serum 25-OH D levels were measured with a chemiluminescent immunoassay. RESULTS Vitamin D deficiency (25-OH D < 50 nmol/L) was present in 71% of the chronic pain patients; another 21% had insufficient vitamin D (25-OH D < 75 nmol/L). After adjustment for demographic and clinical variables, there was a mean ± standard error of the mean increase in pain intensity of 0.61 ± 0.25 for each 25 nmol/L decrease in 25-OH D (P = 0.011). Lower 25-OH D levels were also related to greater symptom severity (r = -0.21, P = 0.008) but not to the widespread pain index (P = 0.83) or fibromyalgia (P = 0.51). CONCLUSIONS The findings suggest a role of low vitamin D levels in heightened central sensitivity, particularly augmented pain processing upon mechanical stimulation, in chronic pain patients. Vitamin D seems comparatively less important for self-reports of spontaneous chronic pain.

Relevance: 90.00%

Abstract:

The ATLS programme of the American College of Surgeons is probably the most important globally active training organization dedicated to improving trauma management. Detection of acute haemorrhagic shock is one of the key issues in clinical practice and thus also in medical teaching. In this issue of the journal, William Schulz and Ian McConachrie critically review the ATLS shock classification (Table 1), which has been criticized after several attempts at validation have failed [1]. The main problem is that distinct ranges of heart rate are related to ranges of uncompensated blood loss, and that the heart rate decrease observed in severe haemorrhagic shock is ignored [2].

Table 1. Estimated blood loss based on the patient's initial presentation (ATLS Student Course Manual, 9th Edition, American College of Surgeons 2012).

Parameter | Class I | Class II | Class III | Class IV
Blood loss (ml) | Up to 750 | 750–1500 | 1500–2000 | >2000
Blood loss (% blood volume) | Up to 15% | 15–30% | 30–40% | >40%
Pulse rate (bpm) | <100 | 100–120 | 120–140 | >140
Systolic blood pressure | Normal | Normal | Decreased | Decreased
Pulse pressure | Normal or increased | Decreased | Decreased | Decreased
Respiratory rate | 14–20 | 20–30 | 30–40 | >35
Urine output (ml/h) | >30 | 20–30 | 5–15 | Negligible
CNS/mental status | Slightly anxious | Mildly anxious | Anxious, confused | Confused, lethargic
Initial fluid replacement | Crystalloid | Crystalloid | Crystalloid and blood | Crystalloid and blood

In a retrospective evaluation of the Trauma Audit and Research Network (TARN) database, blood loss was estimated according to the injuries in nearly 165,000 adult trauma patients and each patient was allocated to one of the four ATLS shock classes [3]. Although heart rate increased and systolic blood pressure decreased from class I to class IV, respiratory rate and GCS were similar. The median heart rate in class IV patients was substantially lower than the value of 140 min−1 postulated by ATLS. Moreover, deterioration of the different parameters does not necessarily occur in parallel, as suggested by the ATLS shock classification [4] and [5]. In all these studies, injury severity score (ISS) and mortality increased with increasing shock class [3] and with increasing heart rate and decreasing blood pressure [4] and [5]. This supports the general concept that the higher the heart rate and the lower the blood pressure, the sicker the patient.

A prospective study attempted to validate a shock classification derived from the ATLS shock classes [6]. The authors used a combination of heart rate, blood pressure, clinically estimated blood loss and response to fluid resuscitation to classify trauma patients (Table 2) [6]. In their initial assessment of 715 predominantly blunt trauma patients, 78% were classified as normal (Class 0), 14% as Class I, 6% as Class II and only 1% each as Class III and Class IV. This corresponds to the results of the previous retrospective studies [4] and [5]. The main endpoint used in the prospective study was therefore the presence or absence of significant haemorrhage, defined as chest tube drainage >500 ml, evidence of >500 ml of blood loss in the peritoneum, retroperitoneum or pelvic cavity on CT scan, or requirement of any blood transfusion or >2000 ml of crystalloid. Because of the low prevalence of class II or higher grades, statistical evaluation was limited to a comparison between Class 0 and Classes I–IV combined.
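Taking Table 1 at face value, the mapping from estimated blood loss to ATLS class can be written as a few threshold checks; the sketch below is purely an illustration of the table, not a validated triage tool, and the worked example values are hypothetical.

```python
def atls_shock_class(blood_loss_percent: float) -> str:
    """Map estimated blood loss (% of blood volume) to the ATLS class of Table 1."""
    if blood_loss_percent <= 15:
        return "Class I"
    if blood_loss_percent <= 30:
        return "Class II"
    if blood_loss_percent <= 40:
        return "Class III"
    return "Class IV"

# A 70 kg adult (~5 l blood volume) who has lost ~1.2 l has lost ~24% -> Class II
print(atls_shock_class(100 * 1.2 / 5.0))
```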
As in the retrospective studies, Lawton et al. did not find a statistically significant difference in heart rate or blood pressure among the five groups either, although there was a tendency towards a higher heart rate in Class II patients. Apparently, classification during the primary survey did not rely on vital signs but considered the rather soft criterion of "clinical estimation of blood loss" and the requirement for fluid substitution. This suggests that allocation of an individual patient to a shock class was probably more an intuitive decision than an objective calculation. Nevertheless, it was a significant predictor of ISS [6].

Table 2. Shock grade categories in the prospective validation study (Lawton, 2014) [6].

Parameter | Normal (no haemorrhage) | Class I (mild) | Class II (moderate) | Class III (severe) | Class IV (moribund)
Vitals | Normal | Normal | HR >100 with SBP >90 mmHg | SBP <90 mmHg | SBP <90 mmHg or imminent arrest
Response to fluid bolus (1000 ml) | NA | Yes, no further fluid required | Yes, no further fluid required | Requires repeated fluid boluses | Declining SBP despite fluid boluses
Estimated blood loss (ml) | None | Up to 750 | 750–1500 | 1500–2000 | >2000

What does this mean for clinical practice and medical teaching? All these studies illustrate the difficulty of validating a useful and accepted general physiological concept of the organism's response to fluid loss: a decrease in cardiac output, an increase in heart rate and a decrease in pulse pressure occur first, with hypotension and bradycardia occurring only later. Increasing heart rate, increasing diastolic blood pressure or decreasing systolic blood pressure should make any clinician consider hypovolaemia first, because it is treatable and deterioration of the patient is preventable. This is true for the patient on the ward, the sedated patient in the intensive care unit and the anaesthetized patient in the operating room. We will therefore continue to teach this typical pattern, but will also continue to mention the exceptions and pitfalls at a second stage.

The ATLS shock classification is primarily used to illustrate the typical pattern of acute haemorrhagic shock (tachycardia and hypotension), as opposed to the Cushing reflex (bradycardia and hypertension) in severe head injury and intracranial hypertension, or to neurogenic shock in acute tetraplegia or high paraplegia (relative bradycardia and hypotension). Schulz and McConachrie nicely summarize the various confounders and exceptions to the general pattern and explain why, in clinical reality, patients often do not present with the "typical" pictures of our textbooks [1]. ATLS also refers to the pitfalls in the signs of acute haemorrhage: advanced age, athletes, pregnancy, medications and pacemakers; it explicitly states that individual subjects may not follow the general pattern. Obviously, the ATLS shock classification, which is the basis for a number of questions in the written test of the ATLS student course and which has been used for decades, probably needs modification and cannot be applied literally in clinical practice. The European Trauma Course, another important trauma training programme, uses the same parameters to estimate blood loss, together with the clinical examination and laboratory findings (e.g. base deficit and lactate), but does not use a shock classification tied to absolute values. In conclusion, the typical physiological response to haemorrhage as illustrated by the ATLS shock classes remains an important issue in clinical practice and in teaching.
The estimation of the severity of haemorrhage in the initial assessment of trauma patients is not (and never was) based solely on vital signs; it also includes the pattern of injuries, the requirement for fluid substitution and potential confounders. Vital signs are not obsolete, especially in the course of treatment, but they must be interpreted in view of the clinical context. Conflict of interest: none declared. The author is a member of the Swiss national ATLS core faculty.

Relevance: 90.00%

Abstract:

BACKGROUND Catechol-O-methyltransferase (COMT) initiates dopamine degradation. Its activity is mainly determined by a single nucleotide polymorphism in the COMT gene (Val158Met, rs4680), separating high (Val/Val, COMT(HH)), intermediate (Val/Met, COMT(HL)) and low metabolizers (Met/Met, COMT(LL)). We investigated dopaminergic denervation in the striatum of PD patients according to COMT rs4680 genotype. METHODS Patients with idiopathic PD were assessed for motor severity (UPDRS-III rating scale in the OFF state) and for dopaminergic denervation using [123I]-FP-CIT SPECT imaging, and were genotyped for COMT rs4680. The [123I]-FP-CIT binding potential (BP) for each voxel was defined as the ratio of tracer binding in the region of interest (striatum, caudate nucleus and putamen) to that in a region of non-specific activity. Genotyping was performed using a TaqMan(®) SNP genotyping assay. We used a regression model to evaluate the effect of COMT genotype on the BP in the striatum and its sub-regions. RESULTS Genotype distribution was: 11 (27.5%) COMT(HH), 26 (65%) COMT(HL) and 3 (7.5%) COMT(LL). There were no significant differences in disease severity, treatments, or motor scores between genotypes. When adjusted for clinical severity, gender and age, low and intermediate metabolizers showed significantly higher rates of striatal denervation (COMT(HL+LL) BP = 1.32 ± 0.04) than high metabolizers (COMT(HH) BP = 1.6 ± 0.08; F(1,34) = 9.0, p = 0.005). Striatal sub-regions showed similar results. BP and UPDRS-III motor scores were highly correlated (r = 0.44, p = 0.04). There was a gender effect, but no gender-genotype interaction. CONCLUSIONS Striatal denervation differs according to the COMT Val158Met polymorphism. COMT activity may play a role as a compensatory mechanism in PD motor symptoms.
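The binding potential defined under METHODS is a per-voxel ratio; a minimal sketch of that calculation, with illustrative array names and toy values rather than study data, is:

```python
import numpy as np

def binding_potential(roi_counts: np.ndarray, reference_counts: np.ndarray) -> np.ndarray:
    """Per-voxel [123I]-FP-CIT binding potential as defined in the abstract:
    tracer binding in the region of interest divided by the mean binding in a
    region of non-specific activity."""
    return roi_counts / reference_counts.mean()

# Toy example: striatal voxels against an occipital reference region
striatum = np.array([1.6, 1.5, 1.4])
occipital = np.array([1.0, 1.05, 0.95])
print(binding_potential(striatum, occipital))
```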

Relevance: 90.00%

Abstract:

Introduction: The treatment of chronic venous insufficiency (CVI) is based on correcting reflux and obstructions to venous blood flow. The detection, severity and treatment of these venous obstructions, which are responsible for the signs and symptoms of CVI, have recently been studied and better understood. These studies define neither what degree of obstruction is significant nor the ultrasonographic criteria for its detection. The aim of this study was to establish ultrasonographic criteria for the diagnosis of iliac venous obstructions, assessing the agreement of this method with intravascular ultrasound (IVUS) in patients with advanced CVI. Methods: Group I (GI) comprised 15 patients (30 limbs; 49.4 ± 10.7 years; 1 man) with early CVI (Clinical-Etiological-Anatomical-Pathophysiological classification, CEAP C1-2), and group II (GII) comprised 51 patients (102 limbs; 50.5 ± 14.5 years; 6 men) with advanced CVI (CEAP C3-6), matched for sex, age and ethnicity. All patients underwent a clinical interview and duplex Doppler vascular ultrasonography (DUS), from which flow phasicity, femoral venous flow indices and velocities, and the velocity and diameter ratios at the iliac obstruction were obtained. The multisegment reflux score was analysed. GI subjects were evaluated by 3 independent examiners. GII patients underwent IVUS, from which the cross-sectional area of the affected venous segments was obtained and compared with the DUS results, grouped into 3 categories: obstructions < 50%; obstructions between 50-79%; and obstructions >= 80%. Results: The predominant CEAP clinical severity class was C1 in 24/30 (80%) limbs in GI and C3 in 54/102 (52.9%) limbs in GII. Reflux was severe (multisegment reflux score >= 3) in 3/30 (10%) limbs in group I and in 45/102 (44.1%) limbs in group II (p < 0.001). Agreement between DUS and IVUS was moderately high when results were grouped into 3 categories (K = 0.598; p < 0.001) and high when grouped into 2 categories (obstructions < 50% and >= 50%) (K = 0.784; p < 0.001). The best cut-off points and their correlation with IVUS were: velocity index (0.9; r = -0.634; p < 0.001); flow index (0.7; r = -0.623; p < 0.001); obstruction ratio (0.5; r = 0.750; p < 0.001); velocity ratio (2.5; r = 0.790; p < 0.001). Absence of flow phasicity was present in 88.2% of patients with >= 80% obstruction on DUS. A vascular ultrasonographic algorithm was constructed using the measurements and cut-off points described, yielding an accuracy of 79.6% for 3 categories (K = 0.655; p < 0.001) and 86.7% for 2 categories (K = 0.730; p < 0.001). Conclusions: DUS showed high agreement with IVUS in the detection of obstructions >= 50%. A velocity ratio >= 2.5 at the obstruction is the best criterion for detecting significant venous obstructions in iliac veins.
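The abstract reports the cut-off points of the ultrasonographic algorithm but not its exact decision logic, so the sketch below only illustrates how such cut-offs might be combined; the combination rules, function name and example values are assumptions, not the study's validated algorithm.

```python
def classify_iliac_obstruction(velocity_ratio: float,
                               flow_phasicity_present: bool,
                               velocity_index: float,
                               flow_index: float) -> str:
    """Illustrative combination of the duplex ultrasound cut-offs reported in
    the abstract (velocity ratio 2.5, velocity index 0.9, flow index 0.7);
    how they are combined here is assumed, not taken from the study."""
    if not flow_phasicity_present and velocity_ratio >= 2.5:
        return ">=80% obstruction suspected"
    if velocity_ratio >= 2.5 or velocity_index < 0.9 or flow_index < 0.7:
        return ">=50% obstruction suspected"
    return "<50% obstruction"

print(classify_iliac_obstruction(3.1, False, 0.6, 0.5))  # ">=80% obstruction suspected"
```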

Relevance: 90.00%

Abstract:

Reducing mortality is a fundamental goal of paediatric intensive care units (PICUs). Disease severity at admission reflects the magnitude of comorbidities and physiological derangements and can be assessed by prognostic mortality scores. The two main scores used in the PICU are the Pediatric Risk of Mortality (PRISM) and the Pediatric Index of Mortality (PIM). PRISM uses the worst values of physiological and laboratory variables during the first 24 hours of admission, whereas PIM2 uses data from the first hour of PICU admission and a single arterial blood gas. There is no consensus in the literature, between PRISM and PIM2, on their usefulness and standardization at intensive care admission for children and adolescents, particularly in a tertiary-level ICU. The aim of this study was to establish which score performs best in assessing mortality prognosis while being easily applicable in PICU routine, so that it can be used in a standardized and continuous way. We performed a retrospective study reviewing the PRISM and PIM2 scores of 359 patients admitted to the paediatric intensive care unit of the Instituto da Criança do Hospital das Clínicas da Faculdade de Medicina da USP, a tertiary-level unit. Mortality was 15%, the main type of admission was medical (78%), and the main reason for admission was respiratory dysfunction (37.3%). Scores of patients who died were higher than those of survivors: 15 versus 7 for PRISM (p = 0.0001) and 11 versus 5 for PIM2 (p = 0.0002), respectively. For the overall sample, the Standardized Mortality Ratio (SMR) indicated that both PIM2 and PRISM underestimated mortality [1.15 (0.84-1.46) and 1.67 (1.23-2.11), respectively]. The Hosmer-Lemeshow test showed adequate calibration for both scores [χ² = 12.96 (p = 0.11) for PRISM and χ² = 13.7 (p = 0.09) for PIM2]. Discrimination, assessed by the area under the ROC curve, was better for PRISM than for PIM2 [0.76 (95% CI 0.69-0.83) versus 0.65 (95% CI 0.57-0.72), p = 0.002]. In the present study, the best sensitivity and specificity for risk of death with PRISM was a score between 13 and 14, showing that, with technological progress, patients need a higher score, that is, greater clinical severity than in the original population, to carry a higher mortality risk. The results of severity scores may be modified by the health system (public or private), the PICU infrastructure (number of beds, staffing, technological resources) and the indication for admission. The choice of a severity score depends on the individual characteristics of the PICU, such as waiting time in the emergency department, the presence of complex chronic disease (for example, oncology patients) and how transport to the PICU is carried out. Ideally, multicentre studies have greater statistical power. However, studies with larger and more homogeneous populations, especially in developing countries, are difficult to carry out.
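The Standardized Mortality Ratio used above is the ratio of observed to expected deaths, with the expected count taken as the sum of the per-patient death probabilities predicted by PRISM or PIM2. A minimal sketch with toy numbers (not the study's data):

```python
def standardized_mortality_ratio(observed_deaths: int, predicted_risks: list[float]) -> float:
    """SMR = observed deaths / expected deaths, where the expected count is the
    sum of the per-patient death probabilities predicted by the score.
    Values above 1 mean the score underestimated mortality, as reported above."""
    expected = sum(predicted_risks)
    return observed_deaths / expected

# Toy cohort: 3 observed deaths against predicted risks summing to 2.0 -> SMR 1.5
print(standardized_mortality_ratio(3, [0.9, 0.6, 0.3, 0.1, 0.1]))
```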

Relevance: 90.00%

Abstract:

Final dissertation for the Integrated Master's degree in Medicine, Faculdade de Medicina, Universidade de Lisboa, 2014.