48 results for PRUEBAS GENÉTICAS
Abstract:
BACKGROUND New biomarkers are needed for the prognosis of advanced colorectal cancer, which remains incurable by conventional treatments. O6-methylguanine DNA methyltransferase (MGMT) methylation and protein expression have been related to colorectal cancer treatment failure and tumor progression. Moreover, the presence in these tumors of cancer stem cells, which are characterized by CD133 expression, has been associated with chemoresistance, radioresistance, metastasis, and local recurrence. The objective of this study was to determine the prognostic value of CD133 and MGMT, and their possible interaction, in colorectal cancer patients. METHODS MGMT and CD133 expression was analyzed by immunohistochemistry in 123 paraffin-embedded colorectal adenocarcinoma samples, recording the percentage of staining and its intensity. MGMT promoter methylation status was determined by bisulfite modification and methylation-specific PCR (MSP). These values were correlated with clinical data, including overall survival (OS), disease-free survival (DFS), tumor stage, and differentiation grade. RESULTS Low MGMT expression intensity was significantly correlated with shorter OS and was a prognostic factor independent of treatment and histopathological variables. A high percentage of CD133 expression was significantly correlated with shorter DFS but was not an independent factor. Patients with low-intensity MGMT expression and ≥50% CD133 expression had the poorest DFS and OS outcomes. CONCLUSIONS Our results support the hypothesis that MGMT expression may be an OS biomarker as useful as tumor stage or differentiation grade and that CD133 expression may be a predictive biomarker of DFS. Thus, MGMT and CD133 may both be useful for determining the prognosis of colorectal cancer patients and for identifying those requiring more aggressive adjuvant therapies. Future studies will be necessary to determine their clinical utility.
Abstract:
Introduction The Surviving Sepsis Campaign (SSC) indicates that a lactate (LT) concentration greater than 4 mmol/l should trigger early resuscitation bundles. However, several recent studies have suggested that LT values lower than 4 mmol/l may be a prognostic marker of adverse outcome. The aim of this study was to identify clinical and analytical prognostic parameters in severe sepsis (SS) or septic shock (ShS) according to quartiles of blood LT concentration. Methods A cohort study was designed in a polyvalent ICU. We studied demographic, clinical and analytical parameters in 148 critically ill adults, within 24 hours from SS or ShS onset according to SSC criteria. We tested for differences in baseline characteristics by lactate interval using a Kruskal-Wallis test for continuous data or a chi-square test for categorical data and reported the median and interquartile ranges; SPSS version 15.0 (SPSS Inc., Chicago, IL, USA). Results We analyzed 148 consecutive episodes of SS (16%) or ShS (84%). The median age was 64 (interquartile range, 48.7 to 71) years; 60% were male. The main sources of infection were the respiratory tract (38%) and intra-abdominal (45%); 70.7% had medical pathology. Mortality at 28 days was 22.7%. Quartiles of blood LT concentration were quartile 1 (Q1): 1.87 mmol/l or less, quartile 2 (Q2): 1.88 to 2.69 mmol/l, quartile 3 (Q3): 2.7 to 4.06 mmol/l, and quartile 4 (Q4): 4.07 mmol/l or greater (Table 1). The median LT concentrations of each quartile were 1.43 (Q1), 2.2 (Q2), 3.34 (Q3), and 5.1 (Q4) mmol/l (P < 0.001). Patients in Q1 had significantly lower APACHE II scores (P = 0.04), SOFA scores (P = 0.024), number of organ failures (NOF) (P < 0.001) and ICU mortality (P = 0.028) compared with patients in Q2, Q3 and Q4. Patients in Q1 also had significantly higher cholesterol (P = 0.06) and lower procalcitonin (P = 0.05) at enrolment. At the extremes, patients in Q1 had decreased 28-day mortality (P = 0.023) and patients in Q4 had increased 28-day mortality compared with the other quartiles (P = 0.009). Interestingly, patients in Q2 had significantly increased mortality compared with patients in Q1 (P = 0.043), whereas patients in Q2 showed no significant difference in 28-day mortality compared with patients in Q3. Conclusion Adverse outcomes and several potential risk factors, including organ failure, are significantly associated with higher quartiles of LT concentrations. It may be useful to revise the lactate cutoff value recommended by the SSC (4 mmol/l).
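The quartile-based comparison described in this abstract can be sketched with standard statistical tooling. The snippet below is an illustrative reconstruction only (the original analysis was run in SPSS 15.0); the input file and column names such as `lactate`, `apache_ii`, and `dead_28d` are hypothetical.

```python
# Sketch of the quartile comparison described above (column names and the
# input file are hypothetical; the original analysis used SPSS 15.0).
import pandas as pd
from scipy.stats import kruskal, chi2_contingency

df = pd.read_csv("sepsis_cohort.csv")  # hypothetical cohort file

# Assign each episode to a blood lactate quartile (Q1..Q4).
df["lt_quartile"] = pd.qcut(df["lactate"], q=4, labels=["Q1", "Q2", "Q3", "Q4"])

# Continuous baseline variable (e.g. APACHE II): Kruskal-Wallis test across quartiles.
groups = [g["apache_ii"].dropna() for _, g in df.groupby("lt_quartile")]
h_stat, p_apache = kruskal(*groups)

# Categorical outcome (28-day mortality): chi-square test across quartiles.
table = pd.crosstab(df["lt_quartile"], df["dead_28d"])
chi2, p_mortality, dof, _ = chi2_contingency(table)

# Median and interquartile range of lactate within each quartile, as reported.
summary = df.groupby("lt_quartile")["lactate"].quantile([0.25, 0.5, 0.75]).unstack()
print(summary)
print("Kruskal-Wallis p =", p_apache, "| chi-square p =", p_mortality)
```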
Abstract:
Introduction Activated protein C (APC) deficiency is prevalent in severe sepsis and septic shock patients. The aim of the study was to relate the anticoagulation activity evaluated by APC with other coagulation parameters, adjusted for 28-day mortality.
Methods A cohort study of 150 critically ill adults. Age, sex, sources of infection and coagulation markers within 24 hours from severe sepsis or septic shock onset, defined according to Surviving Sepsis Campaign (SSC) criteria, were studied. We analyzed APC activity using a hemostasis laboratory analyzer (BCS® XP; Siemens). A descriptive and comparative statistical analysis was performed using SPSS version 15.0 (SPSS Inc., Chicago, IL, USA).
Results We analyzed 150 consecutive episodes of severe sepsis (16%) or septic shock (84%) admitted to the ICU. The median age of the study sample was 64 (interquartile range (IQR): 22.3
Abstract:
OBJECTIVE Delusional disorder has traditionally been considered a psychotic syndrome that does not evolve into cognitive deterioration. However, to date, very little empirical research has explored cognitive executive components and memory processes in delusional disorder patients. This study investigated whether patients with delusional disorder are intact in both executive function components (such as flexibility, impulsivity and updating) and memory processes (such as immediate, short-term and long-term recall, learning and recognition). METHODS A large sample of patients with delusional disorder (n = 86) and a group of healthy controls (n = 343) were compared with regard to their performance in a broad battery of neuropsychological tests including the Trail Making Test, Wisconsin Card Sorting Test, Colour-Word Stroop Test, and Complutense Verbal Learning Test (TAVEC). RESULTS When compared to controls, patients with delusional disorder showed significantly poorer performance in most cognitive tests. Thus, we demonstrate deficits in the flexibility, impulsivity and updating components of executive functions as well as in memory processes. These findings remained significant after taking into account sex, age, educational level and premorbid IQ. CONCLUSIONS Our results do not support the traditional notion of patients with delusional disorder being cognitively intact.
Abstract:
We present the case of a patient with an echocardiographic diagnosis of suspected multiple rhabdomyomas with spontaneous, complete early remission.
Abstract:
BACKGROUND Breast cancer survivors suffer physical impairment after oncology treatment. This impairment reduces quality of life (QoL) and increases the prevalence of handicaps associated with an unhealthy lifestyle (for example, decreased aerobic capacity and strength, weight gain, and fatigue). Recent work has shown that exercise adapted to the individual characteristics of patients is related to improved overall and disease-free survival. Technological support using telerehabilitation systems is a promising strategy, with the great advantage of quick and efficient contact with the health professional. The role of telerehabilitation through therapeutic exercise as a support tool for implementing an active lifestyle, which has been shown to be an effective resource for improving fitness and reducing musculoskeletal disorders in these women, is not yet known. METHODS/DESIGN This study will use a two-arm, assessor-blinded, parallel randomized controlled trial design. Women will be eligible if: they have a diagnosis of stage I, II, or IIIA breast cancer; they have no chronic disease or orthopedic issues that would interfere with the ability to participate in a physical activity program; they have access to the Internet and basic knowledge of computer use, or live with a relative who has this knowledge; they have completed adjuvant therapy (except for hormone therapy) and have no history of cancer recurrence; and they have an interest in improving their lifestyle. Participants will be randomized into e-CUIDATE or usual care groups. E-CUIDATE gives participants access to a range of content: exercise plans arranged in series, with breathing, mobility, strength, and stretching exercises. All of these exercises will be assigned to women in the telerehabilitation group according to perceived needs. The control group will be asked to maintain their usual routine. Study endpoints will be assessed after 8 weeks (immediate effects) and after 6 months. The primary outcome will be QoL measured by the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire Core 30 version 3.0 and its breast module, the European Organization for Research and Treatment of Cancer Breast Cancer-Specific Quality of Life questionnaire. The secondary outcomes are: pain (algometry, Visual Analogue Scale, Brief Pain Inventory short form); body composition; physical measurements (abdominal test, handgrip strength, back muscle strength, and multiple sit-to-stand test); cardiorespiratory fitness (International Fitness Scale, 6-minute walk test, International Physical Activity Questionnaire-Short Form); fatigue (Piper Fatigue Scale and Borg Fatigue Scale); anxiety and depression (Hospital Anxiety and Depression Scale); cognitive function (Trail Making Test and Auditory Consonant Trigram); accelerometry; lymphedema; and anthropometric perimeters. DISCUSSION This study investigates the feasibility and effectiveness of a telerehabilitation system during adjuvant treatment of patients with breast cancer. If this treatment option is effective, telehealth systems could offer a choice of supportive care to cancer patients during the survivorship phase. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT01801527.
Abstract:
A workshop was convened to discuss best practices for the assessment of drug-induced liver injury (DILI) in clinical trials. In a breakout session, workshop attendees discussed necessary data elements and standards for the accurate measurement of DILI risk associated with new therapeutic agents in clinical trials. There was agreement that, in order to achieve this goal, the systematic acquisition of protocol-specified clinical measures and laboratory specimens from all study subjects is crucial. In addition, standard DILI terms that address the diverse clinical and pathologic signatures of DILI were considered essential. There was a strong consensus that the clinical and laboratory analyses necessary for the evaluation of cases of acute liver injury should be consistent with the US Food and Drug Administration (FDA) guidance on pre-marketing risk assessment of DILI in clinical trials issued in 2009. It was recommended that liver injury case review and management be guided by clinicians with hepatologic expertise. Of note, there was agreement that emerging DILI signals should prompt the systematic collection of candidate pharmacogenomic, proteomic and/or metabonomic biomarkers from all study subjects. The use of emerging standardized clinical terminology, CRFs and graphic tools for data review to enable harmonization across clinical trials was strongly encouraged. Many of the recommendations made in the breakout session are in alignment with those made in the other parallel sessions on methodology to assess clinical liver safety data, causality assessment for suspected DILI, and liver safety assessment in special populations (hepatitis B, C, and oncology trials). Nonetheless, a few outstanding issues remain for future consideration.
Abstract:
BACKGROUND & AIMS Hy's Law, which states that hepatocellular drug-induced liver injury (DILI) with jaundice indicates a serious reaction, is used widely to determine risk for acute liver failure (ALF). We aimed to optimize the definition of Hy's Law and to develop a model for predicting ALF in patients with DILI. METHODS We collected data from 771 patients with DILI (805 episodes) from the Spanish DILI registry, from April 1994 through August 2012. We analyzed data collected at DILI recognition and at the time of peak levels of alanine aminotransferase (ALT) and total bilirubin (TBL). RESULTS Of the 771 patients with DILI, 32 developed ALF. Hepatocellular injury, female sex, high levels of TBL, and a high ratio of aspartate aminotransferase (AST) to ALT were independent risk factors for ALF. We compared 3 ways to use Hy's Law to predict which patients would develop ALF; all included TBL greater than 2-fold the upper limit of normal (×ULN) and either an ALT level greater than 3 × ULN, a ratio (R) value (ALT ×ULN divided by alkaline phosphatase ×ULN) of 5 or greater, or a new ratio (nR) value (ALT or AST, whichever produced the highest ×ULN value, divided by alkaline phosphatase ×ULN) of 5 or greater. At recognition of DILI, the R- and nR-based models identified patients who developed ALF with 67% and 63% specificity, respectively, whereas use of only the ALT level identified them with 44% specificity. However, the ALT level and the nR model each identified patients who developed ALF with 90% sensitivity, whereas the R criteria identified them with 83% sensitivity. An equal number of patients who did and did not develop ALF had alkaline phosphatase levels greater than 2 × ULN. An algorithm based on AST greater than 17.3 × ULN, TBL greater than 6.6 × ULN, and AST:ALT greater than 1.5 identified patients who developed ALF with 82% specificity and 80% sensitivity. CONCLUSIONS When applied at DILI recognition, the nR criteria for Hy's Law provide the best balance of sensitivity and specificity, whereas our new composite algorithm provides additional specificity in predicting the ultimate development of ALF.
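The threshold rules compared in this abstract (the R and nR ratios and the composite AST/TBL/AST:ALT algorithm) are simple enough to express directly in code. The sketch below is an illustration under our own naming, not code from the Spanish DILI registry; all inputs are assumed to be expressed as multiples of the upper limit of normal (×ULN), and computing the AST:ALT ratio from those ×ULN values is a simplifying assumption.

```python
# Illustrative encoding of the threshold rules described in the abstract.
# All inputs are multiples of the upper limit of normal (xULN); names are ours.

def new_ratio(alt_uln: float, ast_uln: float, alp_uln: float) -> float:
    """nR: ALT or AST (whichever is higher in xULN) divided by alkaline phosphatase in xULN."""
    return max(alt_uln, ast_uln) / alp_uln

def hys_law_nr(alt_uln: float, ast_uln: float, alp_uln: float, tbl_uln: float) -> bool:
    """nR-based Hy's Law criterion: TBL > 2 xULN and nR >= 5."""
    return tbl_uln > 2 and new_ratio(alt_uln, ast_uln, alp_uln) >= 5

def composite_algorithm(ast_uln: float, alt_uln: float, tbl_uln: float) -> bool:
    """Composite rule reported in the abstract: AST > 17.3 xULN, TBL > 6.6 xULN
    and AST:ALT > 1.5 (here computed from the xULN values, an assumption)."""
    return ast_uln > 17.3 and tbl_uln > 6.6 and (ast_uln / alt_uln) > 1.5

# Example: a hypothetical patient at DILI recognition.
print(hys_law_nr(alt_uln=8.0, ast_uln=12.0, alp_uln=1.5, tbl_uln=3.0))   # True
print(composite_algorithm(ast_uln=20.0, alt_uln=10.0, tbl_uln=7.0))      # True
```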
Abstract:
The impact of antimicrobial resistance on clinical outcomes is the subject of ongoing investigations, although uncertainty remains about its contribution to mortality. We investigated the impact of carbapenem resistance on mortality in Pseudomonas aeruginosa bacteremia in a prospective multicenter (10 teaching hospitals) observational study of patients with monomicrobial bacteremia followed up for 30 days after the onset of bacteremia. The adjusted influence of carbapenem resistance on mortality was studied by using Cox regression analysis. Of 632 episodes, 487 (77%) were caused by carbapenem-susceptible P. aeruginosa (CSPA) isolates, and 145 (23%) were caused by carbapenem-resistant P. aeruginosa (CRPA) isolates. The median incidence density of nosocomial CRPA bacteremia was 2.3 episodes per 100,000 patient-days (95% confidence interval [CI], 1.9 to 2.8). The regression demonstrated a time-dependent effect of carbapenem resistance on mortality as well as a significant interaction with the Charlson index: the deleterious effect of carbapenem resistance on mortality decreased with higher Charlson index scores. The impact of resistance on mortality was statistically significant only from the fifth day after the onset of the bacteremia, reaching its peak values at day 30 (adjusted hazard ratio for a Charlson score of 0 at day 30, 9.9 [95% CI, 3.3 to 29.4]; adjusted hazard ratio for a Charlson score of 5 at day 30, 2.6 [95% CI, 0.8 to 8]). This study clarifies the relationship between carbapenem resistance and mortality in patients with P. aeruginosa bacteremia. Although resistance was associated with a higher risk of mortality, the study suggested that this deleterious effect may not be as great during the first days of the bacteremia or in the presence of comorbidities.
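A minimal sketch of the kind of adjusted Cox model described here is shown below, assuming a pandas DataFrame with hypothetical columns (`days_to_event`, `death`, `carbapenem_resistant`, `charlson`). The interaction term mirrors the reported effect modification by the Charlson index; the time-dependent effect of resistance reported in the study would additionally require episode splitting, which is not reproduced here.

```python
# Sketch of an adjusted Cox model with a resistance x Charlson interaction.
# Column names are hypothetical; the original study also modelled a
# time-dependent effect of resistance, which is not reproduced here.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("pa_bacteremia.csv")  # hypothetical input file

# Interaction between carbapenem resistance and the Charlson comorbidity index.
df["resistance_x_charlson"] = df["carbapenem_resistant"] * df["charlson"]

cph = CoxPHFitter()
cph.fit(
    df[["days_to_event", "death", "carbapenem_resistant",
        "charlson", "resistance_x_charlson"]],
    duration_col="days_to_event",
    event_col="death",
)
cph.print_summary()  # hazard ratios with 95% confidence intervals
```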
Abstract:
BACKGROUND A considerable percentage of multiple sclerosis patients have attentional impairment, but understanding its neurophysiological basis remains a challenge. The Attention Network Test allows three attentional networks to be studied. Previous behavioural studies using this test have shown that the alerting network is impaired in multiple sclerosis. The aim of this study was to identify neurophysiological indexes of attention impairment in relapsing-remitting multiple sclerosis patients using this test. RESULTS After general slowing had been removed in the patient group to isolate the effects of each condition, some behavioural differences between the groups were obtained. For the Contingent Negative Variation, a statistically significant decrement was found in the amplitude for the Central and Spatial Cue conditions in the patient group (p<0.05). ANOVAs showed, for the patient group, a significant latency delay for the P1 and N1 components (p<0.05) and a decrease of P3 amplitude for congruent and incongruent stimuli (p<0.01). With regard to the correlation analysis, PASAT-3s and SDMT showed significant correlations with behavioural measures of the Attention Network Test (p<0.01) and with an ERP parameter (CNV amplitude). CONCLUSIONS Behavioural data are highly correlated with the neuropsychological scores and show that the alerting and orienting mechanisms in the patient group were impaired. The reduced amplitude of the Contingent Negative Variation in the patient group suggests that this component could be a physiological marker related to the alerting and orienting impairment in relapsing-remitting multiple sclerosis. The delayed P1 and N1 latencies are evidence of the demyelination process that causes impairment in the first steps of visual sensory processing. Lastly, P3 amplitude shows a general decrease for the pathological group, probably indexing a more central impairment. These results suggest that the Attention Network Test gives evidence of multiple levels of attention impairment, which could help in the assessment and treatment of relapsing-remitting multiple sclerosis patients.
Abstract:
The impact of the adequacy of empirical therapy on outcome for patients with bloodstream infections (BSI) is key for determining whether adequate empirical coverage should be prioritized over other, more conservative approaches. Recent systematic reviews outlined the need for new studies in the field, using improved methodologies. We assessed the impact of inadequate empirical treatment on the mortality of patients with BSI in the present-day context, incorporating recent methodological recommendations. A prospective multicenter cohort including all BSI episodes in adult patients was performed in 15 hospitals in Andalucía, Spain, over a 2-month period in 2006 to 2007. The main outcome variables were 14- and 30-day mortality. Adjusted analyses were performed by multivariate analysis and propensity score-based matching. Eight hundred one episodes were included. Inadequate empirical therapy was administered in 199 (24.8%) episodes; mortality at days 14 and 30 was 18.55% and 22.6%, respectively. After controlling for age, Charlson index, Pitt score, neutropenia, source, etiology, and presentation with severe sepsis or shock, inadequate empirical treatment was associated with increased mortality at days 14 and 30 (odds ratios [ORs], 2.12 and 1.56; 95% confidence intervals [95% CI], 1.34 to 3.34 and 1.01 to 2.40, respectively). The adjusted ORs after a propensity score-based matched analysis were 3.03 and 1.70 (95% CI, 1.60 to 5.74 and 0.98 to 2.98, respectively). In conclusion, inadequate empirical therapy is independently associated with increased mortality in patients with BSI. Programs to improve the quality of empirical therapy in patients with suspicion of BSI and optimization of definitive therapy should be implemented.
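The two adjustment strategies mentioned in this abstract, multivariate analysis and propensity score-based matching, can be sketched as follows. This is not the authors' analysis code; the input file, column names, and the greedy 1:1 matching without a caliper are all simplifying assumptions.

```python
# Sketch of the two adjustment strategies described in the abstract:
# multivariate logistic regression and 1:1 propensity-score matching.
# Column names are hypothetical; the original analysis may differ in detail.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("bsi_cohort.csv")  # hypothetical input file
confounders = ["age", "charlson", "pitt", "neutropenia", "severe_sepsis_or_shock"]

# 1) Multivariate model: 30-day mortality on inadequate empirical therapy + confounders.
X = sm.add_constant(df[["inadequate_empirical"] + confounders])
outcome_model = sm.Logit(df["dead_30d"], X).fit()

# 2) Propensity score: probability of receiving inadequate empirical therapy.
Xp = sm.add_constant(df[confounders])
ps_model = sm.Logit(df["inadequate_empirical"], Xp).fit()
df["ps"] = ps_model.predict(Xp)

# Greedy 1:1 nearest-neighbour matching on the propensity score (no caliper).
treated = df[df["inadequate_empirical"] == 1]
controls = df[df["inadequate_empirical"] == 0].copy()
matches = []
for _, row in treated.iterrows():
    j = (controls["ps"] - row["ps"]).abs().idxmin()
    matches.append((row.name, j))
    controls = controls.drop(index=j)

print(outcome_model.summary())
print(f"{len(matches)} matched pairs")
```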
Abstract:
There is limited information on the role of penicillin-binding proteins (PBPs) in the resistance of Acinetobacter baumannii to β-lactams. This study presents an analysis of the allelic variations of PBP genes in A. baumannii isolates. Twenty-six A. baumannii clinical isolates (susceptible or resistant to carbapenems) from three teaching hospitals in Spain were included. The antimicrobial susceptibility profile, clonal pattern, and genomic species identification were also evaluated. Based on the six complete genomes of A. baumannii, the PBP genes were identified, and primers were designed for each gene. The nucleotide sequences of the genes identified that encode PBPs and the corresponding amino acid sequences were compared with those of ATCC 17978. Seven PBP genes and one monofunctional transglycosylase (MGT) gene were identified in the six genomes, encoding (i) four high-molecular-mass proteins (two of class A, PBP1a [ponA] and PBP1b [mrcB], and two of class B, PBP2 [pbpA or mrdA] and PBP3 [ftsI]), (ii) three low-molecular-mass proteins (two of type 5, PBP5/6 [dacC] and PBP6b [dacD], and one of type 7, PBP7/8 [pbpG]), and (iii) a monofunctional enzyme (MtgA [mtgA]). Hot spot mutation regions were observed, although most of the allelic changes found translated into silent mutations. The amino acid consensus sequences corresponding to the PBP genes in the genomes and the clinical isolates were highly conserved. The changes found in amino acid sequences were associated with concrete clonal patterns but were not directly related to susceptibility or resistance to β-lactams. An insertion sequence disrupting the gene encoding PBP6b was identified in an endemic carbapenem-resistant clone in one of the participant hospitals.
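The comparison of translated PBP sequences against the ATCC 17978 reference described here can be illustrated with a short sketch. The sequences below are placeholders only (they include one silent and one non-silent codon change), and the approach assumes Biopython is available.

```python
# Sketch of comparing a PBP gene's translated sequence with the ATCC 17978
# reference to flag amino acid changes; sequences are short placeholders.
from Bio.Seq import Seq

reference_cds = Seq("ATGAAAACCCTG")  # placeholder reference CDS fragment
isolate_cds = Seq("ATGAAGGCCCTG")    # placeholder isolate CDS fragment

ref_protein = reference_cds.translate()  # "MKTL"
iso_protein = isolate_cds.translate()    # "MKAL" (AAA->AAG is silent; ACC->GCC is not)

# Report positions where the amino acid differs (silent mutations are invisible here).
changes = [
    (i + 1, r, q)
    for i, (r, q) in enumerate(zip(ref_protein, iso_protein))
    if r != q
]
print(changes)  # [(3, 'T', 'A')]
```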
Abstract:
The prefrontal cortex (PFC) and the orbitofrontal cortex (OFC) appear to be associated with both executive functions and olfaction. However, there are few data relating olfactory processing and executive functions in humans. The present study aimed to explore the role of olfaction in executive functioning, making a distinction between primary and more cognitive aspects of olfaction. Three executive tasks of similar difficulty were used: one to assess hot executive functions (Iowa Gambling Task, IGT) and two as measures of cold executive functioning (Stroop Colour and Word Test, SCWT, and Wisconsin Card Sorting Test, WCST). Sixty-two healthy participants were included: 31 with normosmia and 31 with hyposmia. Olfactory abilities were assessed using the "Sniffin' Sticks" test, and olfactory threshold, odour discrimination and odour identification measures were obtained. All participants were female, aged between 18 and 60 years. Results showed that participants with hyposmia displayed worse performance in decision making (IGT; Cohen's d = 0.91) and cognitive flexibility (WCST; Cohen's d between 0.54 and 0.68) compared to those with normosmia. Multiple regression adjusted for the covariates age and education level showed a positive association between odour identification and the cognitive inhibition response (SCWT interference; Beta = 0.29; p = .034). Odour discrimination capacity was not a predictor of cognitive executive performance. Our results suggest that both hot and cold executive functions are associated with higher-order olfactory functioning in humans. These results support the hypothesis that olfaction and executive measures share a common neural substrate in the PFC and OFC, and suggest that olfaction might be a reliable cognitive marker in psychiatric and neurological disorders.
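The effect-size and covariate-adjusted regression analyses reported in this abstract can be sketched as follows; the input file and column names (`group`, `igt_score`, `stroop_interference`, `odour_identification`, `age`, `education`) are hypothetical.

```python
# Sketch of the group comparison (Cohen's d) and covariate-adjusted regression
# described in the abstract; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("olfaction_executive.csv")  # hypothetical input file

def cohens_d(a: pd.Series, b: pd.Series) -> float:
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

normosmia = df[df["group"] == "normosmia"]["igt_score"]
hyposmia = df[df["group"] == "hyposmia"]["igt_score"]
print("IGT Cohen's d:", cohens_d(normosmia, hyposmia))

# Multiple regression of Stroop interference on odour identification,
# adjusted for age and education level.
model = smf.ols("stroop_interference ~ odour_identification + age + education", data=df).fit()
print(model.summary())
```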