Abstract:
Distinguishing drug-induced liver injury (DILI) from idiopathic autoimmune hepatitis (AIH) can be challenging. We performed a standardized histologic evaluation to explore potential hallmarks to differentiate AIH versus DILI. Biopsies from patients with clinically well-characterized DILI [n = 35, including 19 hepatocellular injury (HC) and 16 cholestatic/mixed injury (CS)] and AIH (n = 28) were evaluated for Ishak scores, prominent inflammatory cell types in portal and intra-acinar areas, and the presence or absence of emperipolesis, rosette formation, and cholestasis in a blinded fashion by four experienced hepatopathologists. Histologic diagnosis was concordant with clinical diagnosis in 65% of cases, but agreement on the final diagnosis among the four pathologists was complete in only 46% of cases. Interface hepatitis, focal necrosis, and portal inflammation were present in all evaluated cases, but were more severe in AIH (P < 0.05) than in DILI (HC). Portal and intra-acinar plasma cells, rosette formation, and emperipolesis were features that favored AIH (P < 0.02). A model combining portal inflammation, portal plasma cells, intra-acinar lymphocytes and eosinophils, rosette formation, and canalicular cholestasis yielded an area under the receiver operating characteristic curve (AUROC) of 0.90 in predicting DILI (HC) versus AIH. All Ishak inflammation scores were more severe in AIH than in DILI (CS) (P ≤ 0.05). The four AIH-favoring features listed above were consistently more prevalent in AIH, whereas portal neutrophils and intracellular (hepatocellular) cholestasis were more prevalent in DILI (CS) (P < 0.02). The combination of portal inflammation, fibrosis, portal neutrophils and plasma cells, and intracellular (hepatocellular) cholestasis yielded an AUROC of 0.91 in predicting DILI (CS) versus AIH.
Conclusion: Although an overlap of histologic findings exists for AIH and DILI, sufficient differences exist so that pathologists can use the pattern of injury to suggest the correct diagnosis.
Abstract:
In a bankruptcy situation, not all claimants are affected in the same way. In particular, some depositors may enter into personal bankruptcy if they lose part of their investments. Events of this kind may lead to a social catastrophe. We propose discrimination among the claimants as a possible solution. Such discrimination is in fact contemplated in American bankruptcy law (among others), and was practiced by Santander Bank, which in the Madoff case reimbursed only the deposits of its retail customers. Moreover, the necessity of discriminating has already been mentioned in different contexts by Young (1988), Bossert (1995), Thomson (2003) and Pulido et al. (2002, 2007), for instance. In this paper, we take a bankruptcy solution as the reference point. Given this initial allocation, we make transfers from richer to poorer agents with the purpose of distributing not only the personal losses incurred as evenly as possible but also the transfers in a progressive way. The agents are divided into two groups depending on their personal monetary value (wealth, net income, GDP or any other characteristic). Then, we impose a set of axioms that bound the maximal transfer that each net contributor can make and each net receiver can obtain. Finally, we define a value discriminant solution, and we characterize it by means of the Lorenz criterion. Endogenous convex combinations between solutions are also considered.
Keywords: Bankruptcy, Discrimination, Compensation, Rules
JEL classification: C71, D63, D71.
Abstract:
Some forensic and clinical circumstances require knowledge of the frequency of drug use. Care of the patient, and administrative and legal consequences, will differ depending on whether the subject is a regular or an occasional cannabis smoker. To this end, 11-nor-9-carboxy-Δ9-tetrahydrocannabinol (THCCOOH) has been proposed as a criterion to help distinguish between these two groups of users. However, to date this indicator has not been adequately assessed under experimental conditions. We carried out a placebo-controlled administration study of smoked cannabis. Cannabinoid levels were determined in whole blood using tandem mass spectrometry. Significant differences in THCCOOH concentrations were found between the two groups when measured during the screening visit, prior to the smoking session, and throughout the day of the experiment. Receiver operating characteristic (ROC) curves were determined and two threshold criteria were proposed to distinguish between these groups: a free THCCOOH concentration below 3 µg/L suggested occasional consumption (≤ 1 joint/week), while a concentration higher than 40 µg/L corresponded to heavy use (≥ 10 joints/month). These thresholds were tested and found to be consistent with previously published experimental data. The decision threshold of 40 µg/L could serve as a cut-off for possible disqualification from driving while under the influence of cannabis. A further medical assessment and follow-up would be necessary for the reissuing of a driving license once abstinence from cannabis has been demonstrated. A THCCOOH level below 3 µg/L would indicate that no medical assessment is required. Copyright © 2013 John Wiley & Sons, Ltd.
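The two published cut-offs amount to a three-way decision rule. A minimal sketch of those cut-offs only (the function name and the "indeterminate" label for the 3–40 µg/L range are our own; this is not a validated clinical tool):

```python
def classify_cannabis_use(free_thccooh_ug_l: float) -> str:
    """Classify use frequency from free THCCOOH in whole blood (µg/L).

    Cut-offs from the study: < 3 µg/L suggests occasional use
    (<= 1 joint/week); > 40 µg/L suggests heavy use (>= 10 joints/month).
    Intermediate values are left undecided here.
    """
    if free_thccooh_ug_l < 3:
        return "occasional"     # no further medical assessment required
    if free_thccooh_ug_l > 40:
        return "heavy"          # candidate cut-off for driving disqualification
    return "indeterminate"      # would call for further assessment
```

For example, `classify_cannabis_use(1.5)` returns `"occasional"`, while a value of 55 µg/L falls in the heavy-use range.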
Abstract:
BACKGROUND: The elderly population is particularly at risk of developing vitamin B12 deficiency. Serum cobalamin does not necessarily reflect a normal B12 status. The determination of methylmalonic acid is not available in all laboratories. Issues of sensitivity for holotranscobalamin and the low specificity of total homocysteine limit their utility. The aim of the present study is to establish a diagnostic algorithm using a combination of these markers in place of a single measurement. METHODS: We compared the diagnostic efficiency of these markers for the detection of vitamin B12 deficiency in a population (n = 218) of institutionalized elderly (median age 80 years). Biochemical, haematological and morphological data were used to categorize people with or without vitamin B12 deficiency. RESULTS: In receiver operating characteristic curve analysis for the detection of vitamin B12 deficiency using single measurements, serum folate had the greatest area under the curve (0.87) and homocysteine the lowest (0.67). The best specificity was observed for erythrocyte folate and methylmalonic acid (100% for both), but their sensitivity was very low (17% and 53%, respectively). The highest sensitivity was observed for homocysteine (81%) and serum folate (74%). When we combined these markers, starting with serum and erythrocyte folate, followed by holotranscobalamin and ending with methylmalonic acid measurements, the overall sensitivity and specificity of the algorithm were 100% and 90%, respectively. CONCLUSION: The proposed algorithm, which combines erythrocyte folate, serum folate, holotranscobalamin and methylmalonic acid, but eliminates B12 and tHcy measurements, is a useful alternative for vitamin B12 deficiency screening in an elderly institutionalized cohort.
Abstract:
OBJECTIVES To evaluate the advantages of cytology and PCR for high-risk human papillomavirus (HR-HPV) infection in the biopsy-based diagnosis of high-grade squamous intraepithelial lesions (HSIL = AIN2/AIN3) in HIV-positive men who have sex with men (MSM). METHODS This is a single-center study conducted between May 2010 and May 2014 in patients (n = 201, mean age 37 years) recruited from our outpatient clinic. Samples of anal canal mucosa were taken into liquid medium for HPV PCR analysis and for cytology. Anoscopy was performed for histologic evaluation. RESULTS On anoscopy, 33.8% were normal, 47.8% showed low-grade squamous intraepithelial lesions (LSIL), and 18.4% HSIL; 80.2% had HR-HPV. PCR for HR-HPV had greater sensitivity than cytology (88.8% vs. 75.7%) in HSIL screening, with similar positive (PPV) and negative predictive values (NPV) of 20.3 vs. 22.9 and 89.7 vs. 88.1, respectively. Combining both tests increased the sensitivity and NPV of HSIL diagnosis to 100%. Correlation of cytology vs. histology was, in general, very low, and that of HR-HPV PCR vs. histology was non-existent (<0.2) or low (<0.4). Area under the receiver operating characteristic (AUROC) curve analysis of cytology and HR-HPV PCR for the diagnosis of HSIL was poor (<0.6). Multivariate regression analysis showed that protective factors against HSIL were viral suppression (OR: 0.312; 95%CI: 0.099-0.984) and/or syphilis infection (OR: 0.193; 95%CI: 0.045-0.827). HSIL risk was associated with the HPV-68 genotype (OR: 20.1; 95%CI: 2.04-197.82). CONCLUSIONS When cytology and HR-HPV PCR findings are normal, the diagnosis of pre-malignant HSIL can be reliably ruled out in HIV-positive patients. Viral suppression with treatment protects against the appearance of HSIL.
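The gain from "combining both tests" follows from the standard parallel (either-positive) rule, which trades specificity for sensitivity. A sketch under that reading of the abstract:

```python
def hsil_screen_positive(cytology_positive: bool, hr_hpv_pcr_positive: bool) -> bool:
    """Parallel combination: the combined screen is positive if EITHER anal
    cytology OR HR-HPV PCR is positive. In the cohort this raised the
    sensitivity and NPV for HSIL to 100%, at the cost of more referrals.
    """
    return cytology_positive or hr_hpv_pcr_positive
```

Only patients negative on both tests are screened out, which is why a doubly normal result reliably rules out HSIL.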
Abstract:
OBJECTIVES: The aim of this study was to evaluate new electrocardiographic (ECG) criteria for discriminating between incomplete right bundle branch block (RBBB) and the Brugada types 2 and 3 ECG patterns. BACKGROUND: Brugada syndrome can manifest as either a type 2 or type 3 pattern. The latter should be distinguished from incomplete RBBB, present in 3% of the population. METHODS: Thirty-eight patients with either a type 2 or type 3 Brugada pattern who were referred for an antiarrhythmic drug challenge (AAD) were included. Before AAD, 2 angles were measured from ECG leads V(1) and/or V(2) showing incomplete RBBB: 1) α, the angle between a vertical line and the downslope of the r'-wave, and 2) β, the angle between the upslope of the S-wave and the downslope of the r'-wave. Baseline angle values, alone or combined with QRS duration, were compared between patients with negative and positive results on AAD. Receiver operating characteristic curves were constructed to identify optimal discriminative cutoff values. RESULTS: The mean β angle was significantly smaller in the 14 patients with negative results on AAD than in the 24 patients with positive results (36 ± 20° vs. 62 ± 20°, p < 0.01). Its optimal cutoff value was 58°, which yielded a positive predictive value of 73% and a negative predictive value of 87% for conversion to a type 1 pattern on AAD; α was slightly less sensitive and specific than β. Combining the angles with QRS duration tended to improve discrimination. CONCLUSIONS: In patients with suspected Brugada syndrome, simple ECG criteria can enable discrimination between incomplete RBBB and the types 2 and 3 Brugada patterns.
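The β-angle criterion reduces to a single comparison. A sketch (the ≥ orientation at exactly 58° is an assumption; the abstract reports only 58° as the optimal cutoff):

```python
def predicts_type1_conversion(beta_angle_deg: float, cutoff_deg: float = 58.0) -> bool:
    """Predict conversion to a type 1 Brugada pattern on drug challenge from
    the baseline beta angle (degrees) measured in leads V1/V2.

    Per the study, beta >= 58 degrees favored a positive challenge (PPV 73%),
    while smaller angles favored a negative one (NPV 87%). A screening aid
    only, not a replacement for the drug challenge itself.
    """
    return beta_angle_deg >= cutoff_deg
```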
Abstract:
The large inflow of foreign population to Western countries has boosted the study of acculturation processes among scholars in recent decades. Using the case of Catalonia, a receiver region of international and national migration since the fifties, this paper seeks to intersect a classic acculturation model with a newly reemerging literature in political science on contextual determinants of individual behavior. Does context matter for understanding individuals' subjective national identity and, therefore, their voting behavior? Multilevel models show that environment matters. The percentage of Spain-born population in the town is statistically significant in accounting for variance in subjective national identity and nationalist vote, even after controlling for age, sex, origin, language, left–right orientation and other contextual factors. This conclusion invites researchers not to underestimate the direct effect of the environment on individual outcomes such as feelings of belonging and vote orientation in contexts of rival identities.
Abstract:
BACKGROUND: Antinucleosome autoantibodies were previously described to be a marker of active lupus nephritis. However, the true prevalence of antinucleosome antibodies at the time of active proliferative lupus nephritis has not been well established. Therefore, the aim of this study is to define the prevalence and diagnostic value of autoantibodies against nucleosomes as a marker for active proliferative lupus nephritis. STUDY DESIGN: Prospective multicenter diagnostic test study. SETTING & PARTICIPANTS: 35 adult patients with systemic lupus erythematosus (SLE) at the time of the renal biopsy showing active class III or IV lupus nephritis compared with 59 control patients with SLE. INDEX TEST: Levels of antinucleosome antibodies and anti-double-stranded DNA (anti-dsDNA) antibodies. REFERENCE TEST: Kidney biopsy findings of class III or IV lupus nephritis at the time of sampling in a study population versus clinically inactive or no nephritis in a control population. RESULTS: Increased concentrations of antinucleosome antibodies were found in 31 of 35 patients (89%) with active proliferative lupus nephritis compared with 47 of 59 control patients (80%) with SLE. No significant difference between the 2 groups with regard to number of positive patients (P = 0.2) or antibody concentrations (P = 0.2) could be found. The area under the receiver operating characteristic curve as a marker of the accuracy of the test in discriminating between proliferative lupus nephritis and inactive/no nephritis in patients with SLE was 0.581 (95% confidence interval, 0.47 to 0.70; P = 0.2). Increased concentrations of anti-dsDNA antibodies were found in 33 of 35 patients (94.3%) with active proliferative lupus nephritis compared with 49 of 58 control patients (84.5%) with SLE (P = 0.2). In patients with proliferative lupus nephritis, significantly higher titers of anti-dsDNA antibodies were detected compared with control patients with SLE (P < 0.001). 
The area under the receiver operating characteristic curve in discriminating between proliferative lupus nephritis and inactive/no nephritis in patients with SLE was 0.710 (95% confidence interval, 0.60 to 0.82; P < 0.001). CONCLUSIONS: Antinucleosome antibodies have a high prevalence in patients with severe lupus nephritis. However, our data suggest that determining antinucleosome antibodies is of limited help in the distinction of patients with active proliferative lupus nephritis from patients with SLE without active renal disease.
Abstract:
The relative contributions of Alzheimer disease (AD) and vascular lesion burden to the occurrence of cognitive decline are more difficult to define in the oldest-old than they are in younger cohorts. To address this issue, we examined 93 prospectively documented autopsy cases aged 90 to 103 years with various degrees of AD lesions, lacunes, and microvascular pathology. Cognitive assessment was performed prospectively using the Clinical Dementia Rating scale. Neuropathologic evaluation included Braak neurofibrillary tangle (NFT) and β-amyloid (Aβ) protein deposition staging and bilateral semiquantitative assessment of vascular lesions. Statistics included regression models and receiver operating characteristic analyses. Braak NFTs, Aβ deposition, and cortical microinfarcts (CMIs) predicted 30% of Clinical Dementia Rating variability and 49% of the presence of dementia. Braak NFT and CMI thresholds yielded 0.82 sensitivity, 0.91 specificity, and 0.84 correct classification rates for dementia. Using these threshold values, we could distinguish 3 groups of demented cases and propose criteria for the neuropathologic definition of mixed dementia, pure vascular dementia, and AD in very old age. Braak NFT staging and severity of CMIs allow for defining most demented cases in the oldest-old. Most importantly, we identified single cutoff scores for these variables that could be used in the future to formulate neuropathologic criteria for mixed dementia in this age group.
Abstract:
OBJECTIVE: To comprehensively assess pre-, intra-, and postoperative delirium risk factors as potential targets for intervention. BACKGROUND: Delirium after cardiac surgery is associated with longer intensive care unit (ICU) stay and poorer functional and cognitive outcomes. Reports on delirium risk factors have so far not covered the full range of patients' presurgical conditions, intraoperative factors, and postoperative course. METHODS: After written informed consent, 221 consecutive patients ≥ 50 years scheduled for cardiac surgery were assessed for preoperative cognitive performance and functional and physical status. Clinical and biochemical data were systematically recorded perioperatively. RESULTS: Of the 215 patients remaining for analysis, 31% developed delirium in the ICU. Using logistic regression models, older age [73.3 (71.2-75.4) vs 68.5 (67.0-70.0) years; P = 0.016], higher Charlson comorbidity index [3.0 (1.5-4.0) vs 2.0 (1.0-3.0) points; P = 0.009], lower Mini-Mental State Examination (MMSE) score [27 (23-29) vs 28 (27-30) points; P = 0.021], longer cardiopulmonary bypass (CPB) [133 (112-163) vs 119 (99-143) min; P = 0.004], and systemic inflammatory response syndrome in the ICU [25 (36.2%) vs 13 (8.9%); P = 0.001] were independently associated with delirium. Combining age, MMSE score, Charlson comorbidity index, and length of CPB in a regression equation allowed for prediction of postoperative delirium with a sensitivity of 71.19% and a specificity of 76.26% (receiver operating characteristic analysis, area under the curve: 0.791; 95% confidence interval: 0.727-0.845). CONCLUSIONS: Further research will evaluate whether modification of these risk factors prevents delirium and improves outcomes.
Abstract:
ECG criteria for left ventricular hypertrophy (LVH) have been almost exclusively elaborated and calibrated in white populations. Because several interethnic differences in ECG characteristics have been found, the applicability of these criteria to African individuals remains to be demonstrated. We therefore investigated the performance of classic ECG criteria for LVH detection in an African population. Digitized 12-lead ECG tracings were obtained from 334 African individuals randomly selected from the general population of the Republic of Seychelles (Indian Ocean). Left ventricular mass (LVM) was calculated with M-mode echocardiography and indexed to body height. LVH was defined by taking the 95th percentile of body height-indexed LVM values in a reference subgroup. In the entire study sample, 16 men and 15 women (prevalence 9.3%) were finally declared to have LVH, of whom 9 belonged to the reference subgroup. Sensitivity, specificity, accuracy, and positive and negative predictive values for LVH were calculated for 9 classic ECG criteria, and receiver operating characteristic curves were computed. We also generated a new composite time-voltage criterion with stepwise multiple linear regression: weighted time-voltage criterion = (0.2366 × R(aVL) + 0.0551 × R(V5) + 0.0785 × S(V3) + 0.2993 × T(V1)) × QRS duration. The Sokolow-Lyon criterion reached the highest sensitivity (61%) and the R(aVL) voltage criterion the highest specificity (97%) when evaluated at their traditional partition values. However, at a fixed specificity of 95%, the sensitivity of these 10 criteria ranged from 16% to 32%. Best accuracy was obtained with the R(aVL) voltage criterion and the new composite time-voltage criterion (89% for both). Positive and negative predictive values varied considerably depending on the concomitant presence of 3 clinical risk factors for LVH (hypertension, age ≥ 50 years, overweight).
Median positive and negative predictive values of the 10 ECG criteria were 15% and 95%, respectively, for subjects with none or 1 of these risk factors compared with 63% and 76% for subjects with all of them. In conclusion, the performance of classic ECG criteria for LVH detection was largely disparate and appeared to be lower in this population of East African origin than in white subjects. A newly generated composite time-voltage criterion might provide improved performance. The predictive value of ECG criteria for LVH was considerably enhanced with the integration of information on concomitant clinical risk factors for LVH.
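The composite time-voltage criterion above is an explicit linear combination and is straightforward to compute. A sketch, with amplitude units (mV) and QRS duration units (ms) assumed, since the abstract does not state them:

```python
def weighted_time_voltage(r_avl: float, r_v5: float, s_v3: float,
                          t_v1: float, qrs_duration: float) -> float:
    """Composite time-voltage LVH criterion from the study:

        (0.2366*R(aVL) + 0.0551*R(V5) + 0.0785*S(V3) + 0.2993*T(V1)) * QRS

    Amplitudes are lead measurements (assumed mV) and qrs_duration is the
    QRS duration (assumed ms). The partition value for calling LVH would be
    chosen at a fixed specificity (e.g. 95%) on the target population.
    """
    voltage_sum = (0.2366 * r_avl + 0.0551 * r_v5
                   + 0.0785 * s_v3 + 0.2993 * t_v1)
    return voltage_sum * qrs_duration
```

With all four amplitudes at 1 mV and a 100-ms QRS, the criterion evaluates to 66.95.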
Abstract:
PURPOSE: Early-onset sepsis (EOS) is one of the main causes of admission of newborns to the neonatal intensive care unit. However, traditional infection markers are poor diagnostic markers of EOS. Pancreatic stone protein (PSP) is a promising sepsis marker in adults. The aim of this study was to investigate whether determining PSP improves the diagnosis of EOS in comparison with other infection markers. METHODS: This was a prospective multicentre study involving 137 infants with a gestational age of >34 weeks who were admitted with suspected EOS. PSP, procalcitonin (PCT), soluble human triggering receptor expressed on myeloid cells-1 (sTREM-1), macrophage migration inhibitory factor (MIF) and C-reactive protein (CRP) were measured at admission. Receiver operating characteristic (ROC) curve analysis was performed. RESULTS: The level of PSP in infected infants was significantly higher than that in uninfected ones (median 11.3 vs. 7.5 ng/ml, respectively; p = 0.001). The ROC area under the curve was 0.69 [95% confidence interval (CI) 0.59-0.80; p < 0.001] for PSP, 0.77 (95% CI 0.66-0.87; p < 0.001) for PCT, 0.66 (95% CI 0.55-0.77; p = 0.006) for CRP, 0.62 (0.51-0.73; p = 0.055) for sTREM-1 and 0.54 (0.41-0.67; p = 0.54) for MIF. PSP predicted EOS independently of PCT (p < 0.001), and using both markers concomitantly significantly increased the ability to diagnose EOS. A bioscore combining PSP (>9 ng/ml) and PCT (>2 ng/ml) was the best predictor of EOS (0.83; 95% CI 0.74-0.93; p < 0.001) and resulted in a negative predictive value of 100% and a positive predictive value of 71%. CONCLUSIONS: In this prospective study, the diagnostic performance of PSP and PCT was superior to that of traditional markers, and a combined bioscore improved the diagnosis of sepsis. Our findings suggest that PSP is a valuable biomarker in combination with PCT in EOS.
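The bioscore can be sketched as a simple sum over the two reported cut-offs. Whether the study combined the markers as points or as a logical rule is not stated in the abstract, so the 0–2 scoring form here is an assumption:

```python
def eos_bioscore(psp_ng_ml: float, pct_ng_ml: float) -> int:
    """Two-marker bioscore for suspected early-onset sepsis: one point for
    PSP > 9 ng/ml and one point for PCT > 2 ng/ml (0-2 points total).
    In the study the combined score outperformed either marker alone
    (AUC 0.83, NPV 100%, PPV 71%).
    """
    return int(psp_ng_ml > 9.0) + int(pct_ng_ml > 2.0)
```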
Abstract:
RATIONALE AND OBJECTIVES: To systematically review and meta-analyze published data on the diagnostic accuracy of fluorine-18-fluorodeoxyglucose ((18)F-FDG) positron emission tomography (PET) and PET/computed tomography (CT) in the differential diagnosis between malignant and benign pleural lesions. METHODS AND MATERIALS: A comprehensive literature search of studies published through June 2013 regarding the diagnostic performance of (18)F-FDG-PET and PET/CT in the differential diagnosis of pleural lesions was carried out. All retrieved studies were reviewed and qualitatively analyzed. Pooled sensitivity, specificity, positive and negative likelihood ratios (LR+ and LR-) and diagnostic odds ratio (DOR) of (18)F-FDG-PET or PET/CT in the differential diagnosis of pleural lesions on a per-patient-based analysis were calculated. The area under the summary receiver operating characteristic curve (AUC) was calculated to measure the accuracy of these methods. Subanalyses considering the device used (PET or PET/CT) were performed. RESULTS: Sixteen studies including 745 patients were included in the systematic review. The meta-analysis of 11 selected studies provided the following results: sensitivity 95% (95% confidence interval [95%CI]: 92-97%), specificity 82% (95%CI: 76-88%), LR+ 5.3 (95%CI: 2.4-11.8), LR- 0.09 (95%CI: 0.05-0.14), DOR 74 (95%CI: 34-161). The AUC was 0.95. No significant improvement of the diagnostic accuracy was found when considering PET/CT studies only. CONCLUSIONS: (18)F-FDG-PET and PET/CT proved to be accurate diagnostic imaging methods in the differential diagnosis between malignant and benign pleural lesions; nevertheless, possible sources of false-negative and false-positive results should be kept in mind.
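The pooled summary statistics are linked by standard identities. A sketch of those relations (the abstract's pooled values were estimated across studies, so they will not reproduce exactly from the pooled sensitivity and specificity alone):

```python
def diagnostic_summary(sensitivity: float, specificity: float):
    """Derive the likelihood ratios and diagnostic odds ratio from
    sensitivity and specificity via the standard identities:

        LR+ = sens / (1 - spec)
        LR- = (1 - sens) / spec
        DOR = LR+ / LR-

    Returns (LR+, LR-, DOR).
    """
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg, lr_pos / lr_neg
```

With the pooled sensitivity of 0.95 and specificity of 0.82, this gives LR+ ≈ 5.3, in line with the reported value; LR- and DOR differ somewhat from the separately pooled estimates, as expected.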
Abstract:
PURPOSE: Currently, many pre-conditions are regarded as relative or absolute contraindications for lumbar total disc replacement (TDR). Radiculopathy is one of them. In Switzerland, it is left to the surgeon's discretion when to operate, provided he adheres to a list of pre-defined indications. Contraindications, however, are less clearly specified. We hypothesized that the extent of pre-operative radiculopathy results in different benefits for patients treated with mono-segmental lumbar TDR. We used patient-perceived leg pain and its correlation with physician-recorded radiculopathy to create the patient groups to be compared. METHODS: The present study is based on the dataset of SWISSspine, a government-mandated health technology assessment registry. Between March 2005 and April 2009, 577 patients underwent either mono- or bi-segmental lumbar TDR, documented in a prospective observational multicenter mode. A total of 416 cases with a mono-segmental procedure were included in the study. The data collection consisted of pre-operative and follow-up data (physician based) and clinical outcomes (NASS form, EQ-5D). A receiver operating characteristic (ROC) analysis was conducted with patients' self-indicated leg pain and the surgeon-based diagnosis "radiculopathy", as marked on the case report forms. As a result, patients were divided into two groups according to the severity of leg pain. The two groups were compared with regard to pre-operative patient characteristics and pre- and post-operative pain on the Visual Analogue Scale (VAS) and quality of life, using general linear modeling. RESULTS: The optimal ROC model revealed a leg pain threshold of VAS 40 (≤ 40 vs. > 40) for the absence or presence of "radiculopathy". Demographics in the resulting two groups were well comparable. Applying this threshold, the mean pre-operative leg pain level was 16.5 points in group 1 and 68.1 points in group 2 (p < 0.001).
Back pain levels differed less, with 63.6 points in group 1 and 72.6 in group 2 (p < 0.001). Pre-operative quality of life showed considerable differences, with an EQ-5D score of 0.44 in group 1 and 0.29 in group 2 (p < 0.001, possible score range -0.6 to 1). At a mean follow-up time of 8 months, group 1 showed a mean leg pain improvement of 3.6 points and group 2 of 41.1 points (p < 0.001). Back pain relief was 35.6 and 39.1 points, respectively (p = 0.27). EQ-5D score improvement was 0.27 in group 1 and 0.41 in group 2 (p = 0.11). CONCLUSIONS: Patients labeled as having radiculopathy (group 2) mostly have pre-operative leg pain levels ≥ 40. Applying this threshold, patients with pre-operative leg pain also have more severe back pain and a considerably lower quality of life. Their net benefit from lumbar TDR is higher, and they reach post-operative back and leg pain levels, as well as quality of life, similar to those of patients without pre-operative leg pain. Although randomized controlled trials are required to confirm these findings, they put leg pain and radiculopathy into perspective as absolute contraindications for TDR.
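The ROC-derived grouping reduces to a single comparison at VAS 40; the ≤/> orientation at exactly 40 is our reading of the reported threshold:

```python
def leg_pain_group(leg_pain_vas: float) -> int:
    """Assign the ROC-derived group: leg pain VAS <= 40 -> group 1 (leg pain
    treated as absence of radiculopathy), VAS > 40 -> group 2 (presence).
    VAS is the 0-100 Visual Analogue Scale score for leg pain.
    """
    return 1 if leg_pain_vas <= 40.0 else 2
```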
Abstract:
BACKGROUND: The diagnosis of hypertension in children is difficult because of the multiple sex-, age-, and height-specific thresholds used to define elevated blood pressure (BP). The blood pressure-to-height ratio (BPHR) has been proposed to facilitate the identification of elevated BP in children. OBJECTIVE: We assessed the performance of BPHR at a single screening visit to identify children with hypertension, that is, sustained elevated BP. METHOD: In a school-based study conducted in Switzerland, BP was measured at up to three visits in 5207 children. Children were considered to have hypertension if BP was elevated at all three visits. Sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV) for the identification of hypertension were assessed for different thresholds of BPHR. The ability of BPHR at a single screening visit to discriminate between children with and without hypertension was evaluated with receiver operating characteristic (ROC) curve analyses. RESULTS: The prevalence of systolic/diastolic hypertension was 2.2%. Systolic BPHR performed better than diastolic BPHR in identifying hypertension (area under the ROC curve: 0.95 vs. 0.84). The highest performance was obtained with a systolic BPHR threshold set at 0.80 mmHg/cm (sensitivity: 98%; specificity: 85%; PPV: 12%; NPV: 100%) and a diastolic BPHR threshold set at 0.45 mmHg/cm (sensitivity: 79%; specificity: 70%; PPV: 5%; NPV: 99%). The PPV was higher among tall or overweight children. CONCLUSION: BPHR at a single screening visit performed well in identifying hypertension in children, although the low prevalence of hypertension led to a low PPV.
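The single-visit screen described above can be sketched as one ratio per BP component checked against the reported thresholds (the ≥ orientation at the exact threshold values is an assumption):

```python
def bphr_flag(systolic_mmhg: float, diastolic_mmhg: float, height_cm: float) -> bool:
    """Single-visit screening flag using the blood-pressure-to-height ratio
    (BPHR). Thresholds from the study: systolic BPHR 0.80 mmHg/cm and
    diastolic BPHR 0.45 mmHg/cm. A positive flag is not a diagnosis:
    hypertension in the study required elevated BP at three separate visits.
    """
    systolic_bphr = systolic_mmhg / height_cm
    diastolic_bphr = diastolic_mmhg / height_cm
    return systolic_bphr >= 0.80 or diastolic_bphr >= 0.45
```

For a 140-cm child, a systolic BP of 120 mmHg gives a systolic BPHR of about 0.86 and triggers the flag, consistent with the high-sensitivity, low-PPV design of the screen.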