968 results for "score corporal"
Abstract:
Nowadays, physical appearance plays a very important role in our society and is considered by many a basic instrument for achieving social and professional success. The concept of body image, however, is sometimes confused with that of physical appearance: it refers to the feelings each person has about his or her own body. Any alteration of this image therefore affects a person's self-esteem, carries a great psychological and emotional impact, and places the person in a situation of crisis with high psychological vulnerability. Among the various causes of such alteration, surgery, disability, and a wide range of degenerative diseases stand out. While all of these are important, this work focuses on diseases considered "rare" because of their low prevalence and, consequently, the general lack of awareness of them in society. Another characteristic of these diseases is that, rather than altering physical appearance, they produce a separation between body and mind, generating feelings of great uncertainty because the patient does not know which aspect of his or her life will be changed next by the disease. Of all of these, amyotrophic lateral sclerosis (ALS) was chosen as the main subject of study: it is a rare, little-known neurodegenerative disease of unknown cause and with no current cure, which generates feelings of loneliness, helplessness, and social and economic exclusion in patients and their families. The disease progresses rapidly, causing generalized paralysis and thus affecting mobility, speech, swallowing, breathing, and the degree of dependence; the patient will therefore need a caregiver 24 hours a day. In contrast, the senses and cognitive capacity are not affected, so the patient is aware at all times of the progression of the disease and of the progressive loss of function.
Abstract:
The predictive potential of six selected factors was assessed in 72 patients with primary myelodysplastic syndrome using univariate and multivariate logistic regression analysis of survival at 18 months. Factors were age (above median of 69 years), dysplastic features in the three myeloid bone marrow cell lineages, presence of chromosome defects, all metaphases abnormal, double or complex chromosome defects (C23), and a Bournemouth score of 2, 3, or 4 (B234). In the multivariate approach, B234 and C23 proved to be significantly associated with a reduction in the survival probability. The similarity of the regression coefficients associated with these two factors means that they have about the same weight. Consequently, the model was simplified by counting the number of factors (0, 1, or 2) present in each patient, thus generating a scoring system called the Lausanne-Bournemouth score (LB score). The LB score combines the well-recognized and easy-to-use Bournemouth score (B score) with the chromosome defect complexity, C23 constituting an additional indicator of patient outcome. The predicted risk of death within 18 months calculated from the model is as follows: 7.1% (confidence interval: 1.7-24.8) for patients with an LB score of 0, 60.1% (44.7-73.8) for an LB score of 1, and 96.8% (84.5-99.4) for an LB score of 2. The scoring system presented here has several interesting features. The LB score may improve the predictive value of the B score, as it is able to recognize two prognostic groups in the intermediate-risk category of patients with B scores of 2 or 3. It also has the ability to identify two distinct prognostic subclasses among RAEB and possibly CMML patients. In addition to its usefulness in prognostic evaluation described above, the LB score may bring new insights into the understanding of evolution patterns in MDS. We used the combination of the B score and chromosome complexity to define four classes which may be considered four possible states of myelodysplasia and which describe two distinct evolutionary pathways.
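As described above, the LB score is simply a count of two binary adverse factors, each count mapping to a model-predicted 18-month mortality. A minimal sketch of that calculation (illustrative only, not the authors' code; the function and variable names are ours):

# Illustrative sketch (not the authors' code): computing the Lausanne-Bournemouth
# (LB) score as described in the abstract and looking up the predicted risk of
# death within 18 months reported there.

# Predicted risk of death within 18 months, per LB score (point estimates from the abstract).
PREDICTED_RISK_18M = {0: 0.071, 1: 0.601, 2: 0.968}

def lb_score(b234: bool, c23: bool) -> int:
    """LB score = number of adverse factors present:
    b234 -- Bournemouth score of 2, 3, or 4
    c23  -- double or complex chromosome defects
    """
    return int(b234) + int(c23)

if __name__ == "__main__":
    for b234 in (False, True):
        for c23 in (False, True):
            score = lb_score(b234, c23)
            print(f"B234={b234!s:5} C23={c23!s:5} -> LB score {score}, "
                  f"predicted 18-month mortality {PREDICTED_RISK_18M[score]:.1%}")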
Abstract:
BACKGROUND AND PURPOSE: Beyond the Framingham Stroke Risk Score, prediction of future stroke may improve with a genetic risk score (GRS) based on single-nucleotide polymorphisms associated with stroke and its risk factors. METHODS: The study includes 4 population-based cohorts with 2047 first incident strokes from 22,720 initially stroke-free participants of European origin aged ≥55 years, who were followed for up to 20 years. GRSs were constructed with 324 single-nucleotide polymorphisms implicated in stroke and 9 risk factors. The association of the GRS with first incident stroke was tested using Cox regression; the predictive properties of the GRS were assessed with area under the curve statistics, comparing the GRS with age-and-sex and Framingham Stroke Risk Score models, and with reclassification statistics. These analyses were performed per cohort and in a meta-analysis of pooled data. Replication was sought in a case-control study of ischemic stroke. RESULTS: In the meta-analysis, adding the GRS to the Framingham Stroke Risk Score, age, and sex model resulted in a significant improvement in discrimination (all stroke: Δjoint area under the curve=0.016, P=2.3×10⁻⁶; ischemic stroke: Δjoint area under the curve=0.021, P=3.7×10⁻⁷), although the overall area under the curve remained low. In all the studies, there was a highly significant improvement in the net reclassification index (P<10⁻⁴). CONCLUSIONS: The single-nucleotide polymorphisms associated with stroke and its risk factors result in only a small improvement in the prediction of future stroke compared with the classical epidemiological risk factors for stroke.
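The abstract does not give the construction details of the GRS; a common form in such studies is a weighted sum of risk-allele dosages, with published effect sizes as weights. A hypothetical sketch along those lines (SNP identifiers and weights are placeholders, not values from the study):

# A minimal sketch of a weighted genetic risk score (GRS) of the general form
# GRS = sum_i (w_i * dosage_i), where dosage_i is the number of risk alleles
# (0, 1, or 2) at SNP i and w_i is its published effect size. The abstract does
# not specify the exact weighting, so this is an assumption, not the study's
# code; the SNP names below are placeholders.

def genetic_risk_score(dosages: dict, weights: dict) -> float:
    """Weighted sum of risk-allele dosages over the SNPs present in `weights`."""
    return sum(weights[snp] * dosages.get(snp, 0) for snp in weights)

# Hypothetical example with three placeholder SNPs.
weights = {"rs0000001": 0.12, "rs0000002": 0.08, "rs0000003": 0.05}
dosages = {"rs0000001": 2, "rs0000002": 1, "rs0000003": 0}
print(genetic_risk_score(dosages, weights))  # 0.32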
Abstract:
The trabecular bone score (TBS) is an index of bone microarchitectural texture calculated from anteroposterior dual-energy X-ray absorptiometry (DXA) scans of the lumbar spine (LS) that predicts fracture risk, independent of bone mineral density (BMD). The aim of this study was to compare the effects of yearly intravenous zoledronate (ZOL) versus placebo (PLB) on LS BMD and TBS in postmenopausal women with osteoporosis. Changes in TBS were assessed in the subset of 107 patients recruited at the Department of Osteoporosis of the University Hospital of Berne, Switzerland, who were included in the HORIZON trial. All subjects received adequate calcium and vitamin D3. In these patients, randomly assigned to either ZOL (n = 54) or PLB (n = 53) for 3 years, BMD was measured by DXA and TBS assessed by TBS iNsight (v1.9) at baseline and 6, 12, 24, and 36 months after treatment initiation. Baseline characteristics (mean ± SD) were similar between groups in terms of age, 76.8 ± 5.0 years; body mass index (BMI), 24.5 ± 3.6 kg/m²; and TBS, 1.178 ± 0.1, but not LS T-score (ZOL −2.9 ± 1.5 versus PLB −2.1 ± 1.5). Changes in LS BMD were significantly greater with ZOL than with PLB at all time points (p < 0.0001 for all), reaching +9.58% versus +1.38% at month 36. Change in TBS was significantly greater with ZOL than with PLB from month 24 onward, reaching +1.41% versus −0.49% at month 36 (p = 0.031). LS BMD and TBS were weakly correlated (r = 0.20), and there were no correlations between changes in BMD and TBS from baseline at any visit. In postmenopausal women with osteoporosis, once-yearly intravenous ZOL therapy significantly increased LS BMD relative to PLB over 3 years, and TBS as of 2 years. © 2013 American Society for Bone and Mineral Research.
Abstract:
INTRODUCTION: Ventilator-associated pneumonia remains the most common nosocomial infection in the critically ill and contributes to significant morbidity. Eventual decisions regarding withdrawal or maximal therapy are demanding and rely on physicians' experience. Additional objective tools for risk assessment may improve medical judgement. Copeptin, reflecting vasopressin release, as well as the Sequential Organ Failure Assessment (SOFA) score, reflecting the individual degree of organ dysfunction, might qualify for survival prediction in ventilator-associated pneumonia. We investigated the predictive value of the SOFA score and copeptin in ventilator-associated pneumonia. METHODS: One hundred one patients with ventilator-associated pneumonia were prospectively assessed. Death within 28 days after ventilator-associated pneumonia onset was the primary end point. RESULTS: The SOFA score and the copeptin levels at ventilator-associated pneumonia onset were significantly elevated in nonsurvivors (P = .002 and P = .017, respectively). Both markers had different time courses in survivors and nonsurvivors (P < .001 and P = .006). Mean SOFA (the average SOFA score over the 10 days after ventilator-associated pneumonia onset) was superior in predicting 28-day survival compared with SOFA and copeptin at ventilator-associated pneumonia onset (area under the curve, 0.90 vs 0.73 and 0.67, respectively). CONCLUSIONS: The predictive value of serially measured SOFA significantly exceeds that of single SOFA and copeptin measurements. Serial SOFA scores accurately predict outcome in ventilator-associated pneumonia.
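The "mean SOFA" used above is simply the average of the daily SOFA scores over the 10 days following pneumonia onset. A trivial sketch of that aggregation (assumed, not the study's code; the patient data are hypothetical):

# Illustrative sketch (assumed, not the study's code): the "mean SOFA" in the
# abstract is the average of the daily SOFA scores over the 10 days after
# ventilator-associated pneumonia onset.

def mean_sofa(daily_sofa: list) -> float:
    """Average daily SOFA score over the observation window (here, 10 days)."""
    if not daily_sofa:
        raise ValueError("at least one daily SOFA score is required")
    return sum(daily_sofa) / len(daily_sofa)

# Hypothetical patient: SOFA on days 1-10 after pneumonia onset.
print(mean_sofa([9, 9, 8, 8, 7, 7, 6, 6, 5, 5]))  # 7.0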
Abstract:
OBJECTIVE: Best long-term practice in primary HIV-1 infection (PHI) remains unknown for the individual. A risk-based scoring system associated with surrogate markers of HIV-1 disease progression could be helpful to stratify patients with PHI at highest risk for HIV-1 disease progression. METHODS: We prospectively enrolled 290 individuals with well-documented PHI in the Zurich Primary HIV-1 Infection Study, an open-label, non-randomized, observational, single-center study. Patients could choose to undergo early antiretroviral treatment (eART) and stop it after one year of undetectable viremia, to continue treatment indefinitely, or to defer treatment. For each patient we calculated an a priori defined "Acute Retroviral Syndrome Severity Score" (ARSSS), consisting of clinical and basic laboratory variables and ranging from zero to ten points. We used linear regression models to assess the association between ARSSS and log baseline viral load (VL), baseline CD4+ cell count, and log viral setpoint (sVL) (i.e., VL measured ≥90 days after infection or treatment interruption). RESULTS: Mean ARSSS was 2.89. CD4+ cell count at baseline was negatively correlated with ARSSS (p = 0.03, n = 289), whereas HIV-RNA levels at baseline showed a strong positive correlation with ARSSS (p < 0.001, n = 290). In the regression models, a 1-point increase in the score corresponded to a 0.10 log increase in baseline VL and a decline in baseline CD4+ cell count of 12 cells/µl, respectively. In patients with PHI not undergoing eART, a higher ARSSS was significantly associated with a higher sVL (p = 0.029, n = 64). In contrast, in patients undergoing eART with subsequent structured treatment interruption, no correlation was found between sVL and ARSSS (p = 0.28, n = 40). CONCLUSION: The ARSSS is a simple clinical score that correlates with the best-validated surrogate markers of HIV-1 disease progression. In regions where ART is not universally available and eART is not standard, this score may help identify the patients who will benefit most from early antiretroviral therapy.
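A minimal sketch of the reported regression relationships (the slopes come from the abstract; the reference point and function are ours, for illustration only):

# Sketch of the linear relationships reported in the abstract: each 1-point
# increase in the Acute Retroviral Syndrome Severity Score (ARSSS) corresponds
# to a 0.10 log10 higher baseline viral load and a 12 cells/uL lower baseline
# CD4+ count. The reference point below (the cohort mean ARSSS) is used only
# to express shifts; it is not an intercept from the study.

LOG_VL_PER_POINT = 0.10   # log10 copies/mL per ARSSS point (from the abstract)
CD4_PER_POINT = -12.0     # cells/uL per ARSSS point (from the abstract)

def predicted_shift(arsss: float, reference_arsss: float = 2.89):
    """Predicted shift in log10 VL and CD4+ count relative to the mean ARSSS (2.89)."""
    delta = arsss - reference_arsss
    return LOG_VL_PER_POINT * delta, CD4_PER_POINT * delta

d_log_vl, d_cd4 = predicted_shift(6)
print(f"ARSSS 6 vs. mean: {d_log_vl:+.2f} log10 VL, {d_cd4:+.0f} cells/uL CD4+")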
Abstract:
The trabecular bone score (TBS) is a new parameter that is determined from gray-level analysis of dual-energy X-ray absorptiometry (DXA) images. It relies on the mean thickness and volume fraction of trabecular bone microarchitecture. This was a preliminary case-control study to evaluate the potential diagnostic value of TBS as a complement to bone mineral density (BMD), by comparing postmenopausal women with and without fractures. The sample consisted of 45 women with osteoporotic fractures (5 hip fractures, 20 vertebral fractures, and 20 other types of fracture) and 155 women without a fracture. Stratification was performed taking into account each type of fracture (except hip), and women with and without fractures were matched for age and spine BMD. BMD and TBS were measured at the total spine. TBS measured at the total spine revealed a significant difference between the fracture group and the age- and spine BMD-matched nonfracture group, when considering all types of fractures and vertebral fractures. In these cases, the diagnostic value of combining BMD and TBS is likely to be higher than that of BMD alone. TBS, as evaluated directly from standard DXA scans, potentially complements BMD in the detection of osteoporotic fractures. Prospective studies are necessary to fully evaluate the potential role of TBS as a complementary risk factor for fracture.
Abstract:
Developing a novel technique for the efficient, noninvasive clinical evaluation of bone microarchitecture remains both crucial and challenging. The trabecular bone score (TBS) is a new gray-level texture measurement that is applicable to dual-energy X-ray absorptiometry (DXA) images. Significant correlations between TBS and standard 3-dimensional (3D) parameters of bone microarchitecture have been obtained using a numerical simulation approach. The main objective of this study was to empirically evaluate such correlations in anteroposterior spine DXA images. Thirty dried human cadaver vertebrae were evaluated. Micro-computed tomography acquisitions of the bone pieces were obtained at an isotropic resolution of 93 μm. Standard parameters of bone microarchitecture were evaluated in a defined region within the vertebral body, excluding cortical bone. The bone pieces were measured on a Prodigy DXA system (GE Medical-Lunar, Madison, WI), using a custom-made positioning device and experimental setup. Significant correlations were detected between TBS and 3D parameters of bone microarchitecture, mostly independent of any correlation between TBS and bone mineral density (BMD). The greatest correlation was between TBS and connectivity density, with TBS explaining roughly 67.2% of the variance. Based on multivariate linear regression modeling, we have established a model that allows the relationship between TBS and 3D bone microarchitecture parameters to be interpreted. This model indicates that TBS adds value and power of differentiation between samples with similar BMDs but different bone microarchitectures. These results show that TBS, derived directly from DXA imaging, can be used to estimate bone microarchitectural status.
Abstract:
Purpose: SIOPEN scoring of 123I-mIBG imaging has been shown to predict response to induction chemotherapy and outcome at diagnosis in children with high-risk neuroblastoma (HRN). Method: Patterns of skeletal 123I-mIBG uptake were assigned numerical scores (Mscore) ranging from 0 (no metastasis) to 72 (diffuse metastases) within 12 body areas, as described previously. 271 anonymised, paired image data sets acquired at diagnosis and on completion of Rapid COJEC induction chemotherapy were reviewed, constituting a representative sample of the 1602 children treated prospectively within the HR-NBL1/SIOPEN trial. Pre- and post-treatment Mscores were compared with bone marrow cytology (BM) and 3-year event-free survival (EFS). Results: 224/271 patients showed skeletal mIBG uptake at diagnosis and were evaluable for mIBG response. Complete response (CR) on mIBG to Rapid COJEC induction was achieved by 66%, 34%, and 15% of patients who had pre-treatment Mscores of <18 (n=65, 29%), 18-44 (n=95, 42%), and ≥45 (n=64, 28.5%), respectively (chi-squared test, p<0.0001). Mscore at diagnosis and on completion of Rapid COJEC correlated strongly with BM involvement (p<0.0001). The correlation of pre-treatment scores with post-treatment scores and response was highly significant (p<0.001). Most importantly, the 3-year EFS in 47 children with an Mscore of 0 at diagnosis was 0.68 (±0.07), compared with 0.42 (±0.06), 0.35 (±0.05), and 0.25 (±0.06) for patients in the pre-treatment score groups <18, 18-44, and ≥45, respectively (p<0.001). An Mscore threshold of ≥45 at diagnosis was associated with significantly worse outcome compared with all other Mscore groups (p=0.029). The 3-year EFS of 0.53 (±0.07) in patients in metastatic CR (mIBG and BM) after Rapid COJEC (33%) was clearly superior to that of patients not achieving metastatic CR (0.24 (±0.04), p=0.005). Conclusion: SIOPEN scoring of 123I-mIBG imaging predicts response to induction chemotherapy and outcome at diagnosis in children with HRN.
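Although the abstract does not spell out the per-region grading, the stated range (0 for no metastasis to 72 for diffuse metastases across 12 body areas) is consistent with each area being graded 0-6 and the Mscore being their sum. A hedged sketch of that reading, using the pre-treatment risk bands quoted above (the example grades are hypothetical):

# Illustrative sketch of a SIOPEN-type skeletal scoring as described above:
# each of the 12 body areas receives a grade (0 = no uptake, up to 6 = diffuse
# involvement), and the Mscore is their sum (0-72). Risk groups use the
# pre-treatment cut-offs quoted in the abstract (<18, 18-44, >=45).

N_SEGMENTS = 12
MAX_PER_SEGMENT = 6  # so the maximum Mscore is 72

def mscore(segment_grades: list) -> int:
    """Sum the per-segment grades into the total Mscore."""
    if len(segment_grades) != N_SEGMENTS:
        raise ValueError(f"expected {N_SEGMENTS} segment grades")
    if any(g < 0 or g > MAX_PER_SEGMENT for g in segment_grades):
        raise ValueError("each segment grade must be between 0 and 6")
    return sum(segment_grades)

def risk_group(score: int) -> str:
    """Pre-treatment Mscore groups used in the abstract."""
    if score < 18:
        return "<18"
    if score < 45:
        return "18-44"
    return ">=45"

grades = [6, 5, 4, 3, 2, 1, 0, 0, 1, 2, 3, 4]  # hypothetical patient
s = mscore(grades)
print(s, risk_group(s))  # 31 18-44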
Abstract:
Age-related changes in lumbar vertebral microarchitecture were evaluated, as assessed by trabecular bone score (TBS), in a cohort of 5,942 French women. The magnitude of TBS decline between 45 and 85 years of age was piecewise linear in the spine and averaged 14.5%; the rate of decline increases by 50% after 65 years. INTRODUCTION: This study aimed to evaluate age-related changes in lumbar vertebral microarchitecture, as assessed by TBS, in a cohort of French women aged 45-85 years. METHODS: An all-comers cohort of French Caucasian women was selected from two clinical centers. Data obtained from these centers were cross-calibrated for TBS and bone mineral density (BMD). BMD and TBS were evaluated at L1-L4 and for all lumbar vertebrae combined using GE-Lunar Prodigy densitometer images. Weight, height, and body mass index (BMI) also were determined. To validate our all-comers cohort, the BMD normative data of our cohort and French Prodigy data were compared. RESULTS: A cohort of 5,942 French women aged 45 to 85 years was created. Dual-energy X-ray absorptiometry normative data obtained for BMD from this cohort were not significantly different from French Prodigy normative data (p = 0.15). TBS values at L1-L4 were poorly correlated with BMI (r = -0.17) and weight (r = -0.14) and not correlated with height. TBS values obtained for all lumbar vertebrae combined (L1, L2, L3, L4) decreased with age. The magnitude of TBS decline at L1-L4 between 45 and 85 years of age was piecewise linear in the spine and averaged 14.5%, but this rate increased by 50% after 65 years. Similar results were obtained for other regions of interest in the lumbar spine. As opposed to BMD, TBS was not affected by spinal osteoarthrosis. CONCLUSION: The age-specific reference curve for TBS generated here could therefore be used to help clinicians improve osteoporosis patient management and monitor microarchitectural changes related to treatment or other diseases in routine clinical practice.
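One back-of-the-envelope way to read those two figures (an assumption on our part, not per-year rates from the paper): if the total decline from 45 to 85 years is 14.5% and the annual rate after 65 is 50% higher than before, the implied rates are roughly 0.29%/year before 65 and 0.435%/year after, as in this sketch:

# Back-of-the-envelope sketch (assumption, not figures from the paper): with a
# 14.5% total decline between 45 and 85 and a post-65 rate 50% higher than the
# pre-65 rate r:
#     20*r + 20*(1.5*r) = 14.5  =>  r = 0.29 %/year before 65, 0.435 %/year after.

def tbs_decline_percent(age: float) -> float:
    """Cumulative TBS decline (%) since age 45 under the piecewise-linear reading above."""
    r_before, r_after = 0.29, 0.435  # %/year, derived as in the comment above
    if age <= 45:
        return 0.0
    if age <= 65:
        return r_before * (age - 45)
    return r_before * 20 + r_after * (age - 65)

for a in (55, 65, 75, 85):
    print(a, round(tbs_decline_percent(a), 2))
# 55 2.9 / 65 5.8 / 75 10.15 / 85 14.5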
Abstract:
BACKGROUND AND PURPOSE: The DRAGON score predicts functional outcome in the hyperacute phase of intravenous thrombolysis treatment of ischemic stroke patients. We aimed to validate the score in a large multicenter cohort in anterior and posterior circulation. METHODS: Prospectively collected data of consecutive ischemic stroke patients who received intravenous thrombolysis in 12 stroke centers were merged (n=5471). We excluded patients lacking data necessary to calculate the score and patients with missing 3-month modified Rankin scale scores. The final cohort comprised 4519 eligible patients. We assessed the performance of the DRAGON score with area under the receiver operating characteristic curve in the whole cohort for both good (modified Rankin scale score, 0-2) and miserable (modified Rankin scale score, 5-6) outcomes. RESULTS: Area under the receiver operating characteristic curve was 0.84 (0.82-0.85) for miserable outcome and 0.82 (0.80-0.83) for good outcome. Proportions of patients with good outcome were 96%, 93%, 78%, and 0% for 0 to 1, 2, 3, and 8 to 10 score points, respectively. Proportions of patients with miserable outcome were 0%, 2%, 4%, 89%, and 97% for 0 to 1, 2, 3, 8, and 9 to 10 points, respectively. When tested separately for anterior and posterior circulation, there was no difference in performance (P=0.55); areas under the receiver operating characteristic curve were 0.84 (0.83-0.86) and 0.82 (0.78-0.87), respectively. No sex-related difference in performance was observed (P=0.25). CONCLUSIONS: The DRAGON score showed very good performance in the large merged cohort in both anterior and posterior circulation strokes. The DRAGON score provides rapid estimation of patient prognosis and supports clinical decision-making in the hyperacute phase of stroke care (eg, when invasive add-on strategies are considered).
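For quick reference, the outcome proportions reported above by DRAGON score band can be arranged as a small lookup (values are those quoted in the abstract; score bands not reported there are simply absent):

# Illustrative lookup of the observed proportions of good (mRS 0-2) and
# miserable (mRS 5-6) 3-month outcomes by DRAGON score band in this validation
# cohort. Only bands quoted in the abstract are included.

GOOD_OUTCOME = {"0-1": 0.96, "2": 0.93, "3": 0.78, "8-10": 0.00}
MISERABLE_OUTCOME = {"0-1": 0.00, "2": 0.02, "3": 0.04, "8": 0.89, "9-10": 0.97}

def observed_proportions(band: str):
    """Return (good, miserable) proportions for a reported score band, or None if unreported."""
    return GOOD_OUTCOME.get(band), MISERABLE_OUTCOME.get(band)

print(observed_proportions("3"))  # (0.78, 0.04)
print(observed_proportions("8"))  # (None, 0.89)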
Abstract:
BACKGROUND: No prior studies have identified which patients with deep vein thrombosis in the lower limbs are at a low risk for adverse events within the first week of therapy. METHODS: We used data from the Registro Informatizado de la Enfermedad TromboEmbólica (RIETE) to identify patients at low risk for the composite outcome of pulmonary embolism, major bleeding, or death within the first week. We built a prognostic score and compared it with the decision to treat patients at home. RESULTS: As of December 2013, 15,280 outpatients with deep vein thrombosis had been enrolled. Overall, 5164 patients (34%) were treated at home. Of these, 12 (0.23%) had pulmonary embolism, 8 (0.15%) bled, and 4 (0.08%) died. On multivariable analysis, chronic heart failure, recent immobility, recent bleeding, cancer, renal insufficiency, and abnormal platelet count independently predicted the risk for the composite outcome. Among 11,430 patients (75%) considered to be at low risk, 15 (0.13%) suffered pulmonary embolism, 22 (0.19%) bled, and 8 (0.07%) died. The C-statistic was 0.61 (95% confidence interval [CI], 0.57-0.65) for the decision to treat patients at home and 0.76 (95% CI, 0.72-0.79) for the score (P = .003). Net reclassification improvement was 41% (P < .001). Integrated discrimination improvement was 0.034 for the score and 0.015 for the clinical decision (P < .001). CONCLUSIONS: Using 6 easily available variables, we identified outpatients with deep vein thrombosis at low risk for adverse events within the first week. These data may help to safely treat more patients at home. This score, however, should be validated.
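The abstract lists six independent predictors but does not state the exact low-risk rule; one plausible reading (labeled here as an assumption) is that a patient is low risk when none of the six is present, as in this sketch:

# Illustrative sketch (an assumption, not the published RIETE rule): the abstract
# reports six independent predictors of the composite outcome; one plausible
# reading is to count them and call a patient "low risk" when none is present.

RISK_FACTORS = (
    "chronic_heart_failure",
    "recent_immobility",
    "recent_bleeding",
    "cancer",
    "renal_insufficiency",
    "abnormal_platelet_count",
)

def risk_factor_count(patient: dict) -> int:
    """Number of the six predictors present (each a boolean in `patient`)."""
    return sum(bool(patient.get(f, False)) for f in RISK_FACTORS)

def is_low_risk(patient: dict) -> bool:
    """Assumed low-risk definition: no predictor present."""
    return risk_factor_count(patient) == 0

patient = {"cancer": False, "recent_bleeding": False}  # hypothetical outpatient
print(risk_factor_count(patient), is_low_risk(patient))  # 0 True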
Abstract:
Background: A patient's chest pain raises concern for the possibility of coronary heart disease (CHD). An easy-to-use clinical prediction rule has been derived from the TOPIC study in Lausanne. Our objective was to validate this clinical score for ruling out CHD in primary care patients with chest pain. Methods: This secondary analysis used data collected from a one-year follow-up cohort study of patients attending 76 GPs in Germany. Patients attending their GP with chest pain were questioned on their age, gender, duration of chest pain (1-60 min), sternal pain location, pain increase with exertion, absence of a tenderness point at palpation, cardiovascular risk factors, and personal history of cardiovascular disease. The area under the ROC curve, sensitivity, and specificity of the Lausanne CHD score were calculated for patients with full data. Results: 1190 patients were included. Full data were available for 509 patients (42.8%). Missing data were not related to having CHD (p = 0.397) or having a cardiovascular risk factor (p = 0.275). 76 patients (14.9%) were diagnosed with CHD. The prevalence of CHD was 68/344 (19.8%), 2/62 (3.2%), and 6/103 (5.8%) in the high-, intermediate-, and low-risk categories, respectively. The area under the ROC curve was 72.9 (95% CI 66.8; 78.9). Ruling out CHD in low-risk patients had a sensitivity of 92.1% (95% CI 83.0; 96.7) and a specificity of 22.4% (95% CI 18.6; 26.7). Conclusion: The Lausanne CHD score shows reasonably good sensitivity and can be used to rule out coronary events in patients with chest pain. Patients at risk of CHD for other, rarer reasons should nevertheless also be investigated.
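The reported sensitivity and specificity follow directly from the per-category counts quoted above; a small check (illustrative code, using only numbers given in the abstract):

# Reproducing the rule-out performance from the per-category counts: treating
# "not classified low risk" as a positive test for CHD.

categories = {  # risk category: (patients with CHD, total patients)
    "high": (68, 344),
    "intermediate": (2, 62),
    "low": (6, 103),
}

chd_total = sum(chd for chd, _ in categories.values())          # 76
non_chd_total = sum(n - chd for chd, n in categories.values())  # 433

true_positives = sum(chd for cat, (chd, _) in categories.items() if cat != "low")  # 70
true_negatives = categories["low"][1] - categories["low"][0]                       # 97

sensitivity = true_positives / chd_total       # 70/76  ~ 0.921
specificity = true_negatives / non_chd_total   # 97/433 ~ 0.224
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")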
Abstract:
1.1 Objectives: Low cardiac output syndrome (LCOS) is a clinical assessment (Hoffman et al.) whose signs are poorly sensitive, poorly specific, and often late, and whose direct measurement in children is not feasible in clinical practice for technical reasons and because of the limited reliability of the measurement systems. No score is currently applicable in children. For this reason, over the last 10 years many teams have worked actively on this problem in order to define markers that reliably predict the occurrence of LCOS after cardiac surgery in children. This study set out to collect these cardiac markers, combine them, and include them in a low cardiac output score. 1.2 Methods: Children and neonates who underwent cardiac surgery for congenital heart malformation at the CHUV between January 2010 and October 2011 (N=48). Age: 8 days to 13 years (median: 16.3 months). Two scores were developed. The patient list was submitted blind to an expert committee to identify the patients in LCOS at 48 h post-surgery, and the result was then compared with that of the score. The parameters of the first score (SCORE 1) are graded ordinally, whereas those of the second score (SCORE 2) are graded dichotomously. The upper and lower cut-off values of the scores were chosen from an extensive literature search; the intermediate cut-off values (SCORE 1) were chosen at random. 1.3 Results: Multivariate logistic regression for the prediction of LCOS at 48 h showed that only the inotrope (amine) score during the first 24 hours was an independent predictor of LCOS (OR 16.6 [2.6-105.5], p<0.0001). This parameter correlated well with the experts' assessment, with a correlation coefficient of r=0.57 (p<0.0001). The specificities of the two scores (AUC=0.78 (p<0.0001) and AUC=0.81 (p<0.0001), respectively) were 71% and 93.5%, the sensitivities 70.6% and 41.2%, the positive predictive values 57.1% and 77.8%, and the negative predictive values 81.5% and 74.4%, respectively. The chi-squared tests gave 7.7 (p=0.006) and 8.69 (p=0.003), respectively, rejecting the null hypothesis of independence between the experts' assessment and the score's prediction. 1.4 Conclusions: The scores developed in this study do not show a significant correlation with the occurrence of low cardiac output. Even though the parameters used to quantify the occurrence of low cardiac output at 48 h were chosen from an extensive literature search, the retrospective design of the study did not allow the relationship between the occurrence of low cardiac output and the low cardiac output score to be verified effectively.