Abstract:
Anaemia is amongst the major complications of malaria, a major public health problem in the Amazon Region of Latin America. We examined the haemoglobin (Hb) concentrations of malaria-infected patients and compared them with those of malaria-negative febrile patients and afebrile controls. The haematological parameters of febrile patients who had a thick blood smear performed at an infectious diseases reference centre in the Brazilian Amazon between December 2009 and January 2012 were retrieved together with clinical data. An afebrile community control group was drawn from a survey performed in a malaria-endemic area. Hb concentrations and anaemia prevalence were analysed according to clinical-epidemiological status and demographic characteristics. In total, 7,831 observations were included. Patients with Plasmodium falciparum infection had the lowest mean Hb concentration (10.5 g/dL), followed by P. vivax-infected individuals (12.4 g/dL), community controls (12.8 g/dL) and malaria-negative febrile patients (13.1 g/dL) (p < 0.001). Age, gender and clinical-epidemiological status were strong independent predictors of both outcomes. Amongst malaria-infected individuals, women of reproductive age had considerably lower Hb concentrations. In this moderate-transmission-intensity setting, both vivax and falciparum malaria are associated with reduced Hb concentrations and an increased risk of anaemia across a wide age range.
Abstract:
Human immunodeficiency virus (HIV)-positive patients have a greater prevalence of coinfection with human papillomavirus (HPV) types of high oncogenic risk. Indeed, the presence of the virus favours the progression of squamous intraepithelial lesions and may induce cancer. The aim of this study was to evaluate the prevalence of HPV infection, the distribution of HPV types and risk factors among HIV-positive patients. Cervical samples from 450 HIV-positive patients were analysed with regard to oncotic cytology, colposcopy, and HPV presence and type by means of polymerase chain reaction and sequencing. The results were analysed by comparing demographic data and data relating to HPV and HIV infection. The prevalence of HPV was 47.5%. Among the HPV-positive samples, 59% contained viral types of high oncogenic risk. Multivariate analysis showed an association between HPV infection and the presence of cytological alterations (p = 0.003), age greater than or equal to 35 years (p = 0.002), more than three sexual partners (p = 0.002), CD4+ lymphocyte count < 200/mm3 (p = 0.041) and alcohol abuse (p = 0.004). Although high-risk HPV was present in the majority of the lesions studied, the low frequency of HPV 16 (3.3%), the low occurrence of cervical lesions and the preserved immunological state of most of the HIV-positive patients may explain the low occurrence of precancerous cervical lesions in this population.
Abstract:
Dilatation of the ascending aorta (AAD) is a prevalent aortopathy that frequently occurs in association with bicuspid aortic valve (BAV), the most common human congenital cardiac malformation. The molecular mechanisms leading to AAD associated with BAV are still poorly understood. The search for differentially expressed genes in diseased tissue by quantitative real-time PCR (qPCR) is an invaluable tool to fill this gap. However, studies dedicated to identifying reference genes necessary for the normalization of mRNA expression in aortic tissue are scarce. In this report, we evaluate the qPCR expression of six candidate reference genes in tissue from the ascending aorta of 52 patients with a variety of clinical and demographic characteristics, normal and dilated aortas, and different morphologies of the aortic valve (normal aorta and normal valve, n = 30; dilated aorta and normal valve, n = 10; normal aorta and BAV, n = 4; dilated aorta and BAV, n = 8). The expression stability of the candidate reference genes was determined with three statistical algorithms: geNorm, NormFinder and BestKeeper. The expression analyses showed that the most stable genes across the three algorithms were CDKN1β, POLR2A and CASC3, independently of the structure of the aorta and the morphology of the valve. In conclusion, we propose these three genes as reference genes for mRNA expression analysis in human ascending aorta. However, we suggest searching for specific reference genes when conducting qPCR experiments with a new cohort of samples.
Abstract:
BACKGROUND It is not clear to what extent educational programs aimed at promoting diabetes self-management in ethnic minority groups are effective. The aim of this work was to systematically review the effectiveness of educational programs promoting self-management in racial/ethnic minority groups with type 2 diabetes, and to identify program characteristics associated with greater success. METHODS We undertook a systematic literature review. Specific searches were designed and implemented for Medline, EMBASE, CINAHL, ISI Web of Knowledge, Scirus, Current Contents and nine additional sources (from inception to October 2012). We included experimental and quasi-experimental studies assessing the impact of educational programs targeted at racial/ethnic minority groups with type 2 diabetes. We only included interventions conducted in OECD member countries. Two reviewers independently screened citations. Structured forms were used to extract information on intervention characteristics, effectiveness, and cost-effectiveness. When possible, we conducted random-effects meta-analyses using standardized mean differences to obtain aggregate estimates of effect size with 95% confidence intervals. Two reviewers independently extracted all the information and critically appraised the studies. RESULTS We identified thirty-seven studies reporting on thirty-nine educational programs. Most were conducted in the US, with African American or Latino participants. Most programs obtained some benefit over standard care in improving diabetes knowledge, self-management behaviors and clinical outcomes. A meta-analysis of 20 randomized controlled trials (3,094 patients) indicated that the programs produced a reduction in glycated hemoglobin of -0.31% (95% CI -0.48% to -0.14%). Diabetes knowledge and self-management measures were too heterogeneous to pool.
Meta-regressions showed larger reductions in glycated hemoglobin for interventions delivered individually and face to face, for those involving peer educators, for those including cognitive reframing techniques, and for those using a smaller number of teaching methods. The long-term effects remain unknown and cost-effectiveness was rarely estimated. CONCLUSIONS Diabetes self-management educational programs targeted at racial/ethnic minority groups can produce a positive effect on diabetes knowledge and on self-management behavior, ultimately improving glycemic control. Future programs should take into account the key characteristics identified in this review.
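The pooling step described in this abstract, a random-effects meta-analysis of mean differences, can be sketched with the DerSimonian-Laird estimator. The trial effects and standard errors below are fabricated for illustration and are not taken from the review:

```python
import math

def random_effects_pool(effects, ses):
    """DerSimonian-Laird random-effects pooled estimate with a 95% CI.

    effects: per-study (standardized) mean differences
    ses:     their standard errors
    """
    w = [1.0 / se**2 for se in ses]                              # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                # between-study variance
    w_re = [1.0 / (se**2 + tau2) for se in ses]                  # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical HbA1c mean differences (%) and SEs from three invented trials
pooled, ci = random_effects_pool([-0.45, -0.20, -0.30], [0.10, 0.12, 0.15])
```

The pooled estimate always lies within the range of the individual study effects, and the random-effects weights shrink toward equality as the between-study variance grows.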
Abstract:
Purpose: To develop and evaluate a practical method for the quantification of the signal-to-noise ratio (SNR) on coronary MR angiograms (MRA) acquired with parallel imaging. Materials and Methods: To quantify the spatially varying noise due to parallel imaging reconstruction, a new method was implemented incorporating image data acquisition followed by a fast noise scan during which radio-frequency pulses, cardiac triggering and navigator gating are disabled. The performance of this method was evaluated in a phantom study in which SNR measurements were compared with those of a reference standard (multiple repetitions). Subsequently, the SNR of myocardium and posterior skeletal muscle was determined on in vivo human coronary MRA. Results: In a phantom, the SNR measured using the proposed method deviated less than 10.1% from the reference method for small geometry factors (<= 2). In vivo, the noise scan for a 10 min coronary MRA acquisition was acquired in 30 s. Higher signal and lower SNR, due to spatially varying noise, were found in myocardium compared with posterior skeletal muscle. Conclusion: SNR quantification based on a fast noise scan is a validated and easy-to-use method when applied to three-dimensional coronary MRA obtained with parallel imaging, as long as the geometry factor remains low.
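The principle behind the noise-scan approach can be illustrated with a toy simulation: the signal comes from the regular acquisition, while the noise standard deviation is estimated from a matched noise-only acquisition. Because parallel imaging makes the noise spatially varying, the noise must be characterized per region rather than from a single global estimate. All numbers below are simulated and are not from the paper:

```python
import random
import statistics

random.seed(42)

# Hypothetical values: one signal ROI from the image, and the same ROI from a
# noise-only scan (RF pulses, triggering and navigator gating disabled).
true_signal, noise_sd = 100.0, 5.0
signal_roi = [true_signal + random.gauss(0, noise_sd) for _ in range(500)]
noise_roi = [random.gauss(0, noise_sd) for _ in range(500)]

# SNR for this region: mean signal divided by the locally estimated noise SD.
# With parallel imaging this computation is repeated per region (or per pixel,
# via a noise map), since the geometry factor varies across the image.
snr = statistics.mean(signal_roi) / statistics.stdev(noise_roi)
```

With 500 samples the estimate lands close to the true SNR of 20; in practice magnitude-image noise statistics add correction factors that this sketch omits.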
Abstract:
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. Regression calibration is commonly used to adjust for this attenuation, and it requires unbiased reference measurements. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges for the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We show how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and the empirical logit approach, and how to select covariates for the calibration model. The performance of the two-part calibration model was compared with that of its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in an approximately threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model, and that the extent of error adjustment is influenced by the number and form of the covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
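The core of the two-part idea, modelling the probability of consumption separately from the amount consumed, reduces in its simplest covariate-free form to the sketch below. The recall values are hypothetical; the paper's calibration uses regression models with covariates for both parts:

```python
# Hypothetical 24-hour recall values (g/day) for one episodically consumed
# food group: many zero days mixed with positive consumed amounts.
recalls = [0, 0, 120, 0, 80, 0, 0, 200, 0, 150, 0, 0, 0, 95, 0]

# Part 1: probability of consumption (the excess zeroes modelled separately).
p_consume = sum(1 for r in recalls if r > 0) / len(recalls)

# Part 2: expected amount on consumption days only.
consumed = [r for r in recalls if r > 0]
mean_amount = sum(consumed) / len(consumed)

# Two-part estimate of usual intake:
# E[intake] = P(consume) * E[amount | consume]
usual_intake = p_consume * mean_amount
```

Splitting the model this way keeps the zero-inflation from distorting the amount model, which is exactly the challenge the abstract describes for single-replicate reference measurements.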
Abstract:
A workshop was convened to discuss best practices for the assessment of drug-induced liver injury (DILI) in clinical trials. In a breakout session, workshop attendees discussed the data elements and standards necessary for the accurate measurement of DILI risk associated with new therapeutic agents in clinical trials. There was agreement that, to achieve this goal, the systematic acquisition of protocol-specified clinical measures and laboratory specimens from all study subjects is crucial. In addition, standard DILI terms that address the diverse clinical and pathologic signatures of DILI were considered essential. There was strong consensus that the clinical and laboratory analyses necessary for the evaluation of cases of acute liver injury should be consistent with the US Food and Drug Administration (FDA) guidance on pre-marketing risk assessment of DILI in clinical trials issued in 2009. It was recommended that liver injury case review and management be guided by clinicians with hepatologic expertise. Of note, there was agreement that emerging DILI signals should prompt the systematic collection of candidate pharmacogenomic, proteomic and/or metabonomic biomarkers from all study subjects. The use of emerging standardized clinical terminology, case report forms (CRFs) and graphic tools for data review to enable harmonization across clinical trials was strongly encouraged. Many of the recommendations made in the breakout session align with those made in the parallel sessions on methodology to assess clinical liver safety data, causality assessment for suspected DILI, and liver safety assessment in special populations (hepatitis B, C, and oncology trials). Nonetheless, a few outstanding issues remain for future consideration.
Abstract:
This paper presents the evaluation report on a Standard-based Interoperability Framework (SIF) deployed between the Virgen del Rocío University Hospital (VRUH) Haemodialysis (HD) Unit and 5 outsourced HD centres, intended to improve integrated care by automatically sharing patients' Electronic Health Records (EHR) and lab test reports. A pre-post study was conducted over fourteen months. The number of lab test reports, of both emergency and routine nature, for 379 outpatients was computed before and after the integration of the SIF. Before the SIF, 19.38 lab tests per patient were shared between VRUH and the HD centres, 5.52 of them of an emergency nature and 13.85 routine. After integrating the SIF, 17.98 lab tests per patient were shared, 3.82 of them of an emergency nature and 14.16 routine. The inclusion of the SIF in the HD Integrated Care Process led to an average reduction of 1.39 (p = 0.775) lab test requests per patient, including a reduction of 1.70 (p = 0.084) in those of an emergency nature, whereas an increase of 0.31 (p = 0.062) was observed in routine lab tests. This strategy led to a reduction in emergency lab test requests, which implies a potential improvement in integrated care.
Abstract:
In Brazil, human and canine visceral leishmaniasis (CVL) caused by Leishmania infantum has undergone urbanisation since 1980, constituting a public health problem, and serological tests are the tools of choice for identifying infected dogs. Until recently, the Brazilian zoonoses control program recommended enzyme-linked immunosorbent assays (ELISA) and indirect immunofluorescence assays (IFA) as the screening and confirmatory methods, respectively, for the detection of canine infection. The purpose of this study was to estimate the accuracy of ELISA and IFA in parallel and serial combinations. The reference standard comprised the results of direct visualisation of parasites in histological sections, immunohistochemical testing, or isolation of the parasite in culture. Samples from 98 cases and 1,327 noncases were included. Individually, ELISA and IFA presented sensitivities of 91.8% and 90.8% and specificities of 83.4% and 53.4%, respectively. When the tests were used in parallel combination, sensitivity reached 99.2%, while specificity dropped to 44.8%. When they were used in serial combination (ELISA followed by IFA), decreased sensitivity (83.3%) and increased specificity (92.5%) were observed. The serial testing approach thus improved specificity with a moderate loss in sensitivity. This strategy could partially fulfil the needs of public health services and dog owners for a more accurate diagnosis of CVL.
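Under an assumption of conditional independence between the two tests, the parallel and serial accuracies reported above follow directly from the individual ELISA and IFA figures. A minimal sketch (the published combined values differ slightly because the tests are correlated in the actual sample):

```python
def parallel(se1, sp1, se2, sp2):
    # Parallel combination: positive if EITHER test is positive.
    return 1 - (1 - se1) * (1 - se2), sp1 * sp2

def serial(se1, sp1, se2, sp2):
    # Serial combination: positive only if BOTH tests are positive.
    return se1 * se2, 1 - (1 - sp1) * (1 - sp2)

# Individual accuracies reported in the study
se_elisa, sp_elisa = 0.918, 0.834
se_ifa, sp_ifa = 0.908, 0.534

se_par, sp_par = parallel(se_elisa, sp_elisa, se_ifa, sp_ifa)  # ~0.992, ~0.445
se_ser, sp_ser = serial(se_elisa, sp_elisa, se_ifa, sp_ifa)    # ~0.834, ~0.923
```

The computed values (99.2%/44.5% in parallel, 83.4%/92.3% in series) track the reported 99.2%/44.8% and 83.3%/92.5% closely, showing the usual trade-off: parallel testing raises sensitivity at the cost of specificity, serial testing does the opposite.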
Abstract:
Hepatitis D virus (HDV) is endemic in the Amazon Region, and it causes the most severe form of viral hepatitis. Treatment is performed with pegylated interferon, and the immune response appears to be important for infection control. Three groups of HDV patients were studied: untreated and polymerase chain reaction (PCR) positive (n = 9), anti-HDV positive and PCR negative (n = 8), and responders to treatment (n = 12). The cytokines interleukin (IL)-2 (p = 0.0008) and IL-12 (p = 0.02) were differentially expressed among the groups and were also correlated (p = 0.0143). Future studies will be conducted with patients at different stages of treatment, associating the viral load with the serum cytokines produced, thereby attempting to establish a prognostic indicator of the infection.
Abstract:
Objectives. The goal of this study was to evaluate a T2-mapping sequence by: (i) measuring intra- and inter-observer reproducibility in healthy volunteers in two separate scanning sessions with a T2 reference phantom; (ii) measuring the mean T2 relaxation times by T2-mapping in infarcted myocardium in patients with subacute MI and comparing them with the gold standard, X-ray coronary angiography, and with healthy volunteer results. Background. Myocardial edema is a consequence of tissue inflammation, as seen in myocardial infarction (MI). It can be visualized by cardiovascular magnetic resonance (CMR) imaging using the T2 relaxation time. T2-mapping is a quantitative methodology that has the potential to address the limitations of conventional T2-weighted (T2W) imaging. Methods. The T2-mapping protocol used for all MRI scans consisted of a radial gradient echo acquisition with a lung-liver navigator for free-breathing acquisition and affine image registration. Mid-basal short-axis slices were acquired. For the T2-map analyses, two observers semi-automatically segmented the left ventricle into six segments according to the AHA standards. Eight healthy volunteers (age: 27 ± 4 years; 62.5% male) were scanned in two separate sessions. Seventeen patients (age: 61.9 ± 13.9 years; 82.4% male) with subacute STEMI (70.6%) or NSTEMI underwent a T2-mapping scanning session. Results. In healthy volunteers, the mean inter- and intra-observer variability over the entire short-axis slice (segments 1 to 6) was 0.1 ms (95% confidence interval (CI): -0.4 to 0.5, p = 0.62) and 0.2 ms (95% CI: -2.8 to 3.2, p = 0.94), respectively. T2 relaxation time measurements with and without the phantom correction yielded an average difference of 3.0 ± 1.1% and 3.1 ± 2.1% (p = 0.828), respectively. In patients, the inter-observer variability over the entire short-axis slice (S1-S6) was 0.3 ms (95% CI: -1.8 to 2.4, p = 0.85).
Edema location as determined by T2-mapping and the occluded coronary artery as determined on X-ray coronary angiography agreed in 78.6% of cases overall, but in only 60% of apical infarcts. All but one of the maximal T2 values in infarct patients were greater than the upper limit of the 95% confidence interval for normal myocardium. Conclusions. The T2-mapping methodology is accurate in detecting infarcted, i.e. edematous, tissue in patients with subacute infarcts. This study further demonstrated that this T2-mapping technique is reproducible and robust enough to be used on a segmental basis for edema detection without the need for a phantom to yield a T2 correction factor. This new quantitative T2-mapping technique is promising and should allow serial follow-up studies in patients to improve our knowledge of infarct pathophysiology and infarct healing, and to assess novel treatment strategies for acute infarctions.
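The observer-variability figures above (a mean difference with a 95% confidence interval) are the kind of statistic produced by a Bland-Altman-style comparison of paired readings. A small sketch with hypothetical paired segmental T2 values (ms), not the study's data:

```python
import statistics

# Hypothetical T2 readings (ms) for six AHA segments from two observers
obs1 = [38.1, 39.4, 37.8, 40.2, 38.9, 39.0]
obs2 = [38.0, 39.6, 37.5, 40.5, 38.7, 39.2]

diffs = [a - b for a, b in zip(obs1, obs2)]
bias = statistics.mean(diffs)        # mean inter-observer difference
sd = statistics.stdev(diffs)
n = len(diffs)

# 95% CI of the bias; t = 2.571 for n = 6 (df = 5)
half_width = 2.571 * sd / n ** 0.5
ci = (bias - half_width, bias + half_width)
```

A CI that straddles zero, as in the study's volunteer results (e.g. -0.4 to 0.5 ms), indicates no systematic disagreement between the observers.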
Abstract:
BACKGROUND Little is known about the healthcare process for patients with prostate cancer, mainly because hospital-based data are not routinely published. The main objective of this study was to determine the clinical characteristics of prostate cancer patients, the diagnostic process, and the factors that might influence the intervals from first consultation to diagnosis and from diagnosis to treatment. METHODS We conducted a multicentre cohort study in seven hospitals in Spain. Patients' characteristics and diagnostic and therapeutic variables were obtained from hospital records and structured patient interviews from October 2010 to September 2011. We used a multilevel logistic regression model to examine the association between patient care intervals and the variables potentially influencing them (age, BMI, educational level, ECOG performance status, first specialist consultation, tumour stage, PSA, Gleason score, and presence of symptoms) and calculated odds ratios (OR) and interquartile ranges (IQR). To estimate random inter-hospital variability, we used the median odds ratio (MOR). RESULTS 470 patients with prostate cancer were included. Mean age was 67.8 (SD: 7.6) years and 75.4% were physically active. Tumour size was classified as T1 in 41.0% and as T2 in 40.0% of patients, the median Gleason score was 6.0 (IQR: 1.0), and 36.1% had low-risk cancer according to the D'Amico classification. The median interval between first consultation and diagnosis was 89 days (IQR: 123.5), with no statistically significant variability between centres. The presence of symptoms was associated with a significantly longer interval between first consultation and diagnosis than no symptoms (OR: 1.93, 95% CI 1.29-2.89). The median time between diagnosis and first treatment (therapeutic interval) was 75.0 days (IQR: 78.0), and significant variability between centres was found (MOR: 2.16, 95% CI 1.45-4.87).
This interval was shorter in patients with a high PSA value (p = 0.012) and a high Gleason score (p = 0.026). CONCLUSIONS Most incident prostate cancer patients in Spain are diagnosed with adenocarcinoma at an early stage. The diagnostic process takes approximately three months, whereas the therapeutic interval varies among centres and is shorter for patients with a worse prognosis. The presence of prostatic symptoms, PSA level, and Gleason score each influence the clinical intervals differently.
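The median odds ratio (MOR) used to express inter-hospital variability is conventionally computed from the between-cluster variance of the multilevel logistic model as MOR = exp(sqrt(2 * variance) * 0.6745), where 0.6745 is the 75th percentile of the standard normal distribution. A small sketch, inverting the reported MOR to the between-hospital variance it implies (an illustration, not a figure from the study):

```python
import math

PHI_INV_075 = 0.6745  # 75th percentile of the standard normal distribution

def median_odds_ratio(cluster_variance):
    """MOR from the between-hospital variance of a multilevel logistic model."""
    return math.exp(math.sqrt(2 * cluster_variance) * PHI_INV_075)

def variance_from_mor(mor):
    """Inverse: the between-hospital variance implied by a reported MOR."""
    return (math.log(mor) / PHI_INV_075) ** 2 / 2

# The reported MOR of 2.16 for the therapeutic interval implies a
# between-hospital variance of roughly 0.65 on the log-odds scale.
implied_variance = variance_from_mor(2.16)
```

An MOR of 1 would mean no variation between hospitals; 2.16 means that for a patient moving from a lower- to a higher-propensity hospital, the median increase in the odds is more than twofold.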
Abstract:
Objective: To determine the values of, and study the relationships among, central corneal thickness (CCT), intraocular pressure (IOP), and degree of myopia (DM) in an adult myopic population aged 20 to 40 years in Almeria (southeast Spain). To our knowledge, this is the first study of this kind in this region. Methods: An observational, descriptive, cross-sectional study was conducted on a sample of 310 myopic patients (620 eyes) aged 20 to 40 years, selected by gender- and age-stratified sampling proportionally fixed to the size of the population strata, assuming a 20% prevalence of myopia, a 5% margin of error (epsilon), and a 95% confidence level. We studied IOP, CCT, and DM and their relationships by calculating, for each variable, the mean, standard deviation, 95% confidence interval for the mean, median, Fisher's asymmetry coefficient, and range (maximum, minimum), and by applying the Brown-Forsythe robust test. Results: In the adult myopic population of Almeria aged 20 to 40 years (mean age 29.8 years), the mean overall CCT was 550.12 μm. The corneas of men were thicker than those of women (P = 0.014). CCT was stable, as no significant differences were seen in CCT values across the 20- to 40-year-old subjects. The mean overall IOP was 13.60 mmHg. Men had a higher IOP than women (P = 0.002). Subjects over 30 years (13.83 mmHg) had a higher IOP than those under 30 (13.38 mmHg) (P = 0.04). The mean overall DM was −4.18 diopters. Men had less myopia than women (P < 0.001). Myopia was stable in the 20- to 40-year-old study population (P = 0.089). A linear relationship was found between CCT and IOP (R2 = 0.152, P ≤ 0.001): CCT accounted for 15.2% of the variance in IOP. However, no linear relationship was found between DM and IOP or between CCT and DM. Conclusions: CCT was similar to that reported in studies of other populations. IOP tends to increase after the age of 30, and this increase is not accounted for by alterations in CCT values.
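The reported R2 of 0.152 between CCT and IOP is the coefficient of determination of a simple least-squares fit: the share of IOP variance explained by CCT. A minimal sketch with hypothetical CCT/IOP pairs (not the study data, so the resulting R2 differs from 0.152):

```python
def r_squared(x, y):
    """Coefficient of determination for a simple least-squares line y ~ x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy ** 2 / (sxx * syy)

# Hypothetical paired measurements: CCT (um) and IOP (mmHg)
cct = [520, 535, 548, 552, 560, 571, 580, 544, 556, 538]
iop = [12.1, 13.0, 13.4, 14.2, 13.1, 15.0, 14.4, 12.2, 15.6, 12.5]
r2 = r_squared(cct, iop)
```

An R2 of 0.152, as in the study, means 15.2% of the variance in IOP is accounted for by CCT, which is why tonometry readings are often interpreted alongside pachymetry.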
Abstract:
Purpose: Giant papillary conjunctivitis in patients wearing contact lenses occurs after intolerance and/or allergy to contact lenses. Eotaxin is a CC chemokine with a potent and specific chemotactic effect on eosinophils, which are involved in allergies. The purpose of this study was to measure eotaxin levels in the tears of patients wearing contact lenses and of normal subjects. Eotaxin levels were also correlated with the grade of giant papillary conjunctivitis. Methods: Approximately 10 µL of tears were collected with glass capillaries from 16 patients wearing contact lenses and from 10 normal volunteers. Giant papillary conjunctivitis was graded from 0 to 4 by reference to standard slit-lamp photographs of the superior tarsal conjunctiva. Eotaxin concentration in tears was measured by ELISA using mouse anti-human eotaxin monoclonal antibodies. For the statistical analysis of the results, the Wilcoxon/Kruskal-Wallis test was used. Results: The mean concentration of eotaxin was 2698 ± 233 (SEM) pg/mL in patients wearing contact lenses and 1498 ± 139 pg/mL in normal subjects. The difference was statistically significant (P = 0.0004). The mean papilla grade was 1.75 ± 0.19 in patients wearing contact lenses and 0.2 ± 0.13 in normal subjects (P < 0.0001). Papilla grade was correlated with the eotaxin level in tears (R2 = 0.6562, P < 0.0001). Conclusion: An increase in eotaxin levels in tears was measured in patients wearing contact lenses, and eotaxin levels correlated with the severity of giant papillary conjunctivitis. These data suggest that eotaxin could play an important role in papilla formation.
Abstract:
Therapeutic hypothermia (TH) is considered a standard of care in the post-resuscitation phase of cardiac arrest. In experimental models of traumatic brain injury (TBI), TH was found to have neuroprotective properties. However, TH has failed to demonstrate a beneficial effect on neurological outcome in patients with TBI. The absence of benefit when TH is uniformly applied to TBI patients should not call into question its use as a second-tier therapy to treat elevated intracranial pressure. Managing all the practical aspects of TH is key to avoiding side effects and optimizing its potential benefit in the treatment of intracranial hypertension. Induction of TH can be achieved with external surface cooling or with intravascular devices. The therapeutic target should be set at 35°C using brain temperature as the reference, and should be maintained for at least 48 hours and ideally over the entire period of elevated intracranial pressure. Control of the rewarming phase is crucial to avoid temperature overshoot; rewarming should not exceed 1°C per day. Besides its use in the management of intracranial hypertension, therapeutic cooling is also essential to treat hyperthermia in brain-injured patients. In this review, we discuss the benefit-risk balance and practical aspects of therapeutic temperature management in TBI patients.