854 results for Clinical population
Abstract:
BACKGROUND Sacral neuromodulation has become a well-established and widely accepted treatment for refractory non-neurogenic lower urinary tract dysfunction, but its value in patients with a neurological cause is unclear. Although there is evidence indicating that sacral neuromodulation may be effective and safe for treating neurogenic lower urinary tract dysfunction, the number of investigated patients is low and there is a lack of randomized controlled trials. METHODS AND DESIGN This study is a prospective, randomized, placebo-controlled, double-blind multicenter trial including 4 sacral neuromodulation referral centers in Switzerland. Patients with refractory neurogenic lower urinary tract dysfunction are enrolled. After minimally invasive bilateral tined lead placement into the sacral foramina S3 and/or S4, patients undergo prolonged sacral neuromodulation testing for 3-6 weeks. If prolonged sacral neuromodulation testing is successful (defined as an improvement of at least 50% in key bladder diary variables, i.e. number of voids and/or number of leakages and post-void residual, compared to baseline values), the neuromodulator is implanted in the upper buttock. After a 2-month post-implantation phase during which the neuromodulator is turned ON to optimize the effectiveness of neuromodulation using sub-sensory threshold stimulation, the patients are randomized 1:1 to sacral neuromodulation ON or OFF. At the end of the 2-month double-blind sacral neuromodulation phase, the patients undergo a neuro-urological re-evaluation, unblinding takes place, and the neuromodulator is turned ON in all patients. The primary outcome measure is success of sacral neuromodulation; secondary outcome measures are adverse events, urodynamic parameters, questionnaires, and costs of sacral neuromodulation. DISCUSSION It is of utmost importance to know whether the minimally invasive and completely reversible sacral neuromodulation would be a valuable treatment option for patients with refractory neurogenic lower urinary tract dysfunction. If this type of treatment is effective in the neurological population, it would revolutionize the management of neurogenic lower urinary tract dysfunction. TRIAL REGISTRATION http://www.clinicaltrials.gov; Identifier: NCT02165774.
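As an illustration of the success criterion described above, the following minimal sketch checks whether key bladder diary variables improved by at least 50% relative to baseline. The variable names, example counts, and the reading of "and/or" as "any key variable" are assumptions for illustration, not taken from the trial protocol.

```python
# Hypothetical illustration of the ">=50% improvement vs. baseline" criterion;
# the example values and the any-variable interpretation are assumptions.

def improved_at_least_half(baseline: float, follow_up: float) -> bool:
    """True if the diary variable decreased by at least 50% versus baseline."""
    if baseline <= 0:
        return False  # a relative change cannot be computed from a zero baseline
    return (baseline - follow_up) / baseline >= 0.5

baseline = {"voids_per_day": 16, "leakages_per_day": 6, "post_void_residual_ml": 300}
testing = {"voids_per_day": 9, "leakages_per_day": 2, "post_void_residual_ml": 120}

per_variable = {k: improved_at_least_half(baseline[k], testing[k]) for k in baseline}
success = any(per_variable.values())
print(per_variable, "-> prolonged testing successful:", success)
```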
Abstract:
OBJECTIVE The aim of this study was to explore the risk of incident gout in patients with type 2 diabetes mellitus (T2DM) in association with diabetes duration, diabetes severity and antidiabetic drug treatment. METHODS We conducted a case-control study in patients with T2DM using the UK-based Clinical Practice Research Datalink (CPRD). We identified case patients aged ≥18 years with an incident diagnosis of gout between 1990 and 2012. We matched one gout-free control patient to each case patient. We used conditional logistic regression analysis to calculate adjusted ORs (adj. ORs) with 95% CIs and adjusted our analyses for important potential confounders. RESULTS The study encompassed 7536 T2DM cases with a first-time diagnosis of gout. Compared to a diabetes duration <1 year, prolonged diabetes duration (1-3, 3-6, 7-9 and ≥10 years) was associated with decreased adj. ORs of 0.91 (95% CI 0.79 to 1.04), 0.76 (95% CI 0.67 to 0.86), 0.70 (95% CI 0.61 to 0.86), and 0.58 (95% CI 0.51 to 0.66), respectively. Compared to a reference A1C level of <7%, the risk estimates of increasing A1C levels (7.0-7.9, 8.0-8.9 and ≥9%) steadily decreased with adj. ORs of 0.79 (95% CI 0.72 to 0.86), 0.63 (95% CI 0.55 to 0.72), and 0.46 (95% CI 0.40 to 0.53), respectively. Use of insulin, metformin, or sulfonylureas was not associated with an altered risk of incident gout. CONCLUSIONS Increased A1C levels, but not use of antidiabetic drugs, were associated with a decreased risk of incident gout among patients with T2DM.
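For readers unfamiliar with the odds-ratio metric used throughout these results, the sketch below computes a crude (unadjusted) OR and its 95% CI from a 2×2 exposure-by-outcome table. The counts are invented, and the study's actual analysis (conditional logistic regression on matched pairs with confounder adjustment) is not reproduced here.

```python
import math

# Hypothetical 2x2 table (counts are illustrative only):
#                 gout (cases)   no gout (controls)
# exposed               a = 120              b = 380
# unexposed             c = 200              d = 300
a, b, c, d = 120, 380, 200, 300

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf standard error
low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {low:.2f} to {high:.2f})")
```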
Abstract:
BACKGROUND Urinary creatinine excretion is used as a marker of completeness of timed urine collections, which are a keystone of several metabolic evaluations in clinical investigations and epidemiological surveys. The current reference values for 24-hour urinary creatinine excretion rely on observations performed in the 1960s and 1970s in relatively small and mostly selected groups, and may thus poorly fit the present-day general European population. The aim of this study was to establish and validate anthropometry-based age- and sex-specific reference values for 24-hour urinary creatinine excretion in adult populations with preserved renal function. METHODS We used data from two independent Swiss cross-sectional population-based studies with standardised 24-hour urinary collection and measured anthropometric variables. Only data from adults of European descent, with estimated glomerular filtration rate (eGFR) ≥60 ml/min/1.73 m² and reported completeness of the urinary collection were retained. A linear regression model was developed to predict centiles of the 24-hour urinary creatinine excretion in 1,137 participants from the Swiss Survey on Salt and validated in 994 participants from the Swiss Kidney Project on Genes in Hypertension. RESULTS The mean urinary creatinine excretion was 193 ± 41 μmol/kg/24 hours in men and 151 ± 38 μmol/kg/24 hours in women in the Swiss Survey on Salt. The values were inversely correlated with age and body mass index (BMI). Based on current reference values (177 to 221 μmol/kg/24 hours in men and 133 to 177 μmol/kg/24 hours in women), 56% of the urinary collections in the whole population and 67% in people >60 years old would have been considered inaccurate. A linear regression model with sex, BMI and age as predictor variables was found to provide the best prediction of the observed values and showed a good fit when applied to the validation population. CONCLUSIONS We propose a validated prediction equation for 24-hour urinary creatinine excretion in the general European population, based on readily available variables such as age, sex and BMI, together with derived nomograms to ease its clinical application. This should help healthcare providers to interpret the completeness of a 24-hour urine collection in daily clinical practice and in epidemiological population studies.
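The sketch below illustrates, on synthetic data, the kind of linear model the abstract describes (24-hour urinary creatinine excretion regressed on sex, age and BMI). The coefficients, noise level and the final prediction are invented and bear no relation to the published equation or nomograms.

```python
import numpy as np

# Synthetic data only: none of these coefficients come from the study.
rng = np.random.default_rng(0)
n = 200
sex = rng.integers(0, 2, n)              # 1 = male, 0 = female
age = rng.uniform(18, 80, n)             # years
bmi = rng.uniform(18, 35, n)             # kg/m^2
uce = 160 + 40 * sex - 0.4 * age - 1.0 * bmi + rng.normal(0, 20, n)  # umol/kg/24 h

# Ordinary least squares fit of excretion on an intercept, sex, age and BMI.
X = np.column_stack([np.ones(n), sex, age, bmi])
beta, *_ = np.linalg.lstsq(X, uce, rcond=None)
print(dict(zip(["intercept", "sex", "age", "bmi"], np.round(beta, 2))))

# Predicted excretion for a hypothetical 60-year-old woman with a BMI of 24.
new_subject = np.array([1, 0, 60, 24])
print(round(float(new_subject @ beta), 1))
```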
Abstract:
BACKGROUND CONTEXT The nerve root sedimentation sign in transverse magnetic resonance imaging has been shown to discriminate well between selected patients with and without lumbar spinal stenosis (LSS), but the performance of this new test, when used in a broader patient population, is not yet known. PURPOSE To evaluate the clinical performance of the nerve root sedimentation sign in detecting central LSS above L5 and to determine its potential significance for treatment decisions. STUDY DESIGN Retrospective cohort study. PATIENT SAMPLE One hundred eighteen consecutive patients with suspected LSS (52% women, median age 62 years) with a median follow-up of 24 months. OUTCOME MEASURES Oswestry disability index (ODI) and back and leg pain relief. METHODS We performed a clinical test validation study to assess the clinical performance of the sign by measuring its association with health outcomes. Subjects were patients referred to our orthopedic spine unit from 2004 to 2007 before the sign had been described. Based on clinical and radiological diagnostics, patients had been treated with decompression surgery or nonsurgical treatment. Changes in the ODI and pain from baseline to 24-month follow-up were compared between sedimentation sign positives and negatives in both treatment groups. RESULTS Sixty-nine patients underwent surgery. Average baseline ODI in the surgical group was 54.7%, and the sign was positive in 39 patients (mean ODI improvement 29.0 points) and negative in 30 (ODI improvement 28.4), with no statistically significant difference in ODI and pain improvement between groups. In the 49 patients in the nonsurgical group, mean baseline ODI was 42.4%; the sign was positive in 18 (ODI improvement 0.6) and negative in 31 (ODI improvement 17.7). A positive sign was associated with smaller ODI and back pain improvements than a negative sign (both p<.01 on t test). CONCLUSIONS In patients commonly treated with decompression surgery, the sedimentation sign does not appear to predict surgical outcome. In nonsurgically treated patients, a positive sign is associated with more limited improvement. In these cases, surgery might be effective, but this needs investigation in prospective randomized trials (Australian New Zealand Clinical Trial Registry, number ACTRN12610000567022).
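A brief sketch of the group comparison reported for the nonsurgical arm: an independent-samples t test on 24-month ODI improvements between sign-positive and sign-negative patients. The values below are invented and only illustrate the test; they are not the study data.

```python
import numpy as np
from scipy import stats

# Invented 24-month ODI improvements (points) for nonsurgically treated patients.
sign_positive = np.array([2, -1, 0, 3, 1, -2, 4, 0, 1, 2], dtype=float)
sign_negative = np.array([20, 15, 22, 10, 18, 25, 12, 16, 19, 14], dtype=float)

# Student's t test (equal variances assumed by scipy's default).
t_stat, p_value = stats.ttest_ind(sign_positive, sign_negative)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```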
Abstract:
BACKGROUND: Clinical disorders often share common symptoms and aetiological factors. Bifactor models acknowledge the role of an underlying general distress component and more specific sub-domains of psychopathology which specify the unique components of disorders over and above a general factor. METHODS: A bifactor model jointly calibrated data on subjective distress from The Mood and Feelings Questionnaire and the Revised Children's Manifest Anxiety Scale. The bifactor model encompassed a general distress factor, and specific factors for (a) hopelessness-suicidal ideation, (b) generalised worrying and (c) restlessness-fatigue at age 14, which were related to lifetime clinical diagnoses established by interviews at age 14 (concurrent validity) and current diagnoses at 17 years (predictive validity) in a British population sample of 1159 adolescents. RESULTS: Diagnostic interviews confirmed the validity of a symptom-level bifactor model. The underlying general distress factor was a powerful but non-specific predictor of affective, anxiety and behaviour disorders. The specific factors for hopelessness-suicidal ideation and generalised worrying contributed to predictive specificity. Hopelessness-suicidal ideation predicted concurrent and future affective disorder; generalised worrying predicted concurrent and future anxiety, specifically concurrent generalised anxiety disorders. Generalised worrying was negatively associated with behaviour disorders. LIMITATIONS: The analyses of gender differences and the prediction of specific disorders were limited due to a low frequency of disorders other than depression. CONCLUSIONS: The bifactor model was able to differentiate concurrent and predict future clinical diagnoses. This can inform the development of targeted as well as non-specific interventions for prevention and treatment of different disorders.
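In generic terms (not the exact parameterisation fitted by the authors), a bifactor measurement model lets each symptom item load on one general factor plus exactly one specific factor, with the factors constrained to be uncorrelated:

```latex
% Generic bifactor measurement model (illustrative parameterisation only):
% item j of person i loads on the general distress factor G_i and on exactly
% one specific factor S_{k(j),i} (hopelessness-suicidal ideation, generalised
% worrying, or restlessness-fatigue).
x_{ij} = \lambda^{G}_{j}\, G_i + \lambda^{S}_{j}\, S_{k(j),i} + \varepsilon_{ij},
\qquad \operatorname{Cov}\!\left(G_i, S_{k,i}\right) = 0 .
```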
Abstract:
OBJECTIVES The intensity of post-egg retrieval pain is underestimated, with few studies examining post-procedural pain and predictors to identify women at risk for severe pain. We evaluated pre-procedural hormonal levels, ovarian factors, and mechanical temporal summation (mTS) as predictors of post-egg retrieval pain in women undergoing in vitro fertilization (IVF). METHODS Eighteen women scheduled for ultrasound-guided egg retrieval under standardized anesthesia and post-procedural analgesia were enrolled. Pre-procedural mTS, questionnaires, clinical data related to anesthesia and the procedure itself, post-procedural pain scores and pain medication for breakthrough pain were recorded. Statistical analysis included Pearson product moment correlations, Mann-Whitney U tests and multiple linear regressions. RESULTS Average peak post-egg retrieval pain during the first 24 hours was 5.0±1.6 on a numeric rating scale (NRS; 0=no pain, 10=worst pain imaginable). Peak post-egg retrieval pain was correlated with basal antimullerian hormone (AMH) (r=0.549, P=0.018), pre-procedural peak estradiol (r=0.582, P=0.011), total number of follicles (r=0.517, P=0.028) and number of retrieved eggs (r=0.510, P=0.031). Ovarian hyperstimulation syndrome (OHSS) (n=4) was associated with higher basal AMH (P=0.004) and higher peak pain scores (P=0.049), but not with peak estradiol (P=0.13). The mTS did not correlate with peak post-procedural pain (r=0.266, P=0.286) or peak estradiol level (r=0.090, P=0.899). DISCUSSION Peak post-egg retrieval pain intensity was higher than anticipated. Our results suggest that post-egg retrieval pain can be predicted by baseline AMH, high peak estradiol, and OHSS. Further studies to evaluate intra- and post-procedural pain in this population are needed, as well as clinical trials to assess post-procedural analgesia in women presenting with high hormonal levels.
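The two main test families named in the methods can be illustrated on invented numbers: a Pearson correlation between baseline AMH and peak pain, and a Mann-Whitney U test comparing pain scores with and without OHSS. None of the values below are from the study.

```python
import numpy as np
from scipy import stats

# Invented example values: baseline AMH (ng/mL) and peak post-retrieval pain (NRS).
amh = np.array([1.2, 2.5, 3.1, 0.9, 4.0, 2.2, 3.6, 1.8])
pain = np.array([3.0, 4.5, 5.5, 2.5, 7.0, 4.0, 6.0, 3.5])
r, p = stats.pearsonr(amh, pain)
print(f"Pearson r = {r:.3f}, P = {p:.3f}")

# Mann-Whitney U comparing peak pain with vs. without OHSS (groups invented).
pain_ohss = np.array([7.0, 6.0, 5.5])
pain_no_ohss = np.array([3.0, 4.5, 2.5, 4.0, 3.5])
u, p = stats.mannwhitneyu(pain_ohss, pain_no_ohss, alternative="two-sided")
print(f"Mann-Whitney U = {u}, P = {p:.3f}")
```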
Abstract:
BACKGROUND Ultrathin strut biodegradable polymer sirolimus-eluting stents (BP-SES) proved noninferior to durable polymer everolimus-eluting stents (DP-EES) for a composite clinical end point in a population with minimal exclusion criteria. We performed a prespecified subgroup analysis of the Ultrathin Strut Biodegradable Polymer Sirolimus-Eluting Stent Versus Durable Polymer Everolimus-Eluting Stent for Percutaneous Coronary Revascularisation (BIOSCIENCE) trial to compare the performance of BP-SES and DP-EES in patients with diabetes mellitus. METHODS AND RESULTS The BIOSCIENCE trial was an investigator-initiated, single-blind, multicentre, randomized, noninferiority trial comparing BP-SES versus DP-EES. The primary end point, target lesion failure, was a composite of cardiac death, target-vessel myocardial infarction, and clinically indicated target lesion revascularization within 12 months. Among a total of 2119 patients enrolled between February 2012 and May 2013, 486 (22.9%) had diabetes mellitus. Overall, diabetic patients experienced a significantly higher risk of target lesion failure compared with patients without diabetes mellitus (10.1% versus 5.7%; hazard ratio [HR], 1.80; 95% confidence interval [CI], 1.27-2.56; P=0.001). At 1 year, there were no differences between BP-SES and DP-EES in terms of the primary end point in both diabetic (10.9% versus 9.3%; HR, 1.19; 95% CI, 0.67-2.10; P=0.56) and nondiabetic patients (5.3% versus 6.0%; HR, 0.88; 95% CI, 0.58-1.33; P=0.55). Similarly, no significant differences in the risk of definite or probable stent thrombosis were recorded according to treatment arm in either study group (diabetic patients: 4.0% versus 3.1%; HR, 1.30; 95% CI, 0.49-3.41; P=0.60; nondiabetic patients: 2.4% versus 3.4%; HR, 0.70; 95% CI, 0.39-1.25; P=0.23). CONCLUSIONS In the prespecified subgroup analysis of the BIOSCIENCE trial, clinical outcomes among diabetic patients treated with BP-SES or DP-EES were comparable at 1 year. CLINICAL TRIAL REGISTRATION URL: http://www.clinicaltrials.gov. Unique identifier: NCT01443104.
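The hazard ratios quoted above come from time-to-event models. As a minimal sketch of how such an HR could be estimated for the diabetic subgroup, the code below fits a Cox proportional-hazards model with the `lifelines` package (assumed to be installed) on a tiny invented data set; it only illustrates the API, not the trial's analysis.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Invented toy data: days to target lesion failure (or censoring at 365 days),
# event indicator, and stent arm (1 = BP-SES, 0 = DP-EES) for diabetic patients.
df = pd.DataFrame({
    "time":   [365, 120, 365, 200, 365, 90, 365, 365, 310, 365],
    "event":  [0,   1,   0,   1,   0,   1,  0,   0,   1,   0],
    "bp_ses": [1,   1,   1,   1,   1,   0,  0,   0,   0,   0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary)   # exp(coef) for bp_ses is the estimated hazard ratio
```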
Abstract:
REASONS FOR PERFORMING STUDY: The diagnosis of equine back disorders is challenging. Objectively determining movement of the vertebral column may therefore be of value in a clinical setting. OBJECTIVES: To establish whether surface-mounted inertial measurement units (IMUs) can be used to establish normal values for range of motion (ROM) of the vertebral column in a uniform population of horses trotting under different conditions. STUDY DESIGN: Vertebral ROM was established in Franches-Montagnes stallions and a general population of horses, and the variability in measurements was compared between the two groups. Repeatability and the influence of specific exercise conditions on ROM were assessed. Finally, attempts were made to explain the findings of the study through the evaluation of factors that might influence ROM. METHODS: Dorsoventral (DV) and mediolateral (ML) vertebral ROM was measured at a trot under different exercise conditions in 27 Franches-Montagnes stallions and six general population horses using IMUs distributed over the vertebral column. RESULTS: Variability in the ROM measurements was significantly higher for general population horses than for Franches-Montagnes stallions (both DV and ML ROM). Repeatability was strong to very strong for DV measurements and moderate for ML measurements. Trotting under saddle significantly reduced the ROM, with sitting trot resulting in a significantly lower ROM than rising trot. Age is unlikely to explain the low variability in vertebral ROM recorded in the Franches-Montagnes horses, which may instead be associated with conformational factors. CONCLUSIONS: It was possible to establish a normal vertebral ROM for a group of Franches-Montagnes stallions. While within-breed variation was low in this population, further studies are necessary to determine variation in vertebral ROM for other breeds and to assess the utility of such measurements for the diagnosis of equine back disorders.
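Once an IMU signal has been reduced to an angle trace for a trial, the ROM itself is simply the peak-to-peak excursion of that trace. The sketch below shows this on a synthetic dorsoventral signal; the sampling rate, oscillation amplitude and stride frequency are assumptions, not values from the study.

```python
import numpy as np

# Synthetic dorsoventral angle trace (degrees) mimicking trot oscillations.
rng = np.random.default_rng(0)
fs = 100                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)               # 10-second trial
dv_angle = 3.5 * np.sin(2 * np.pi * 2.4 * t) + rng.normal(0, 0.2, t.size)

# Range of motion = maximum minus minimum angle over the trial.
dv_rom = dv_angle.max() - dv_angle.min()
print(f"DV ROM ~ {dv_rom:.1f} degrees")
```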
Abstract:
BACKGROUND AND OBJECTIVES Reliability is an essential condition for using quantitative sensory tests (QSTs) in research and clinical practice, but information on reliability in patients with chronic pain is sparse. The aim of this study was to evaluate the reliability of different QSTs in patients with chronic low back pain. METHODS Eighty-nine patients with chronic low back pain participated in 2 identical experimental sessions, separated by at least 7 days. The following parameters were recorded: pressure pain detection and tolerance thresholds at the toe, electrical pain thresholds to single and repeated stimulation, heat pain detection and tolerance thresholds at the arm and leg, cold pain detection threshold at the arm and leg, and conditioned pain modulation using the cold pressor test. Reliability was analyzed using the coefficient of variation, the coefficient of repeatability, and the intraclass correlation coefficient. It was judged as acceptable or not based primarily on the analysis of the coefficient of repeatability. RESULTS The reliability of most tests was acceptable. Exceptions were cold pain detection thresholds at the leg and arm. CONCLUSIONS Most QST measurements have acceptable reliability in patients with chronic low back pain.
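The three reliability indices named in the methods can be computed from paired test-retest data in a few lines. The sketch below uses invented pressure pain thresholds and standard textbook formulas (root-mean-square within-subject CV, Bland-Altman coefficient of repeatability, one-way random-effects ICC with two sessions); it does not reproduce the paper's exact computational choices.

```python
import numpy as np

# Invented paired test-retest pressure pain thresholds (kPa) for 8 patients.
s1 = np.array([210, 340, 280, 400, 190, 310, 260, 350], dtype=float)
s2 = np.array([230, 320, 300, 380, 200, 330, 240, 360], dtype=float)
n, k = s1.size, 2

# Within-subject coefficient of variation (root-mean-square method), in percent.
subject_mean = (s1 + s2) / 2
within_sd = np.abs(s1 - s2) / np.sqrt(2)
cv = np.sqrt(np.mean((within_sd / subject_mean) ** 2)) * 100

# Bland-Altman coefficient of repeatability: 1.96 x SD of the differences.
cor = 1.96 * np.std(s1 - s2, ddof=1)

# One-way random-effects ICC(1,1) with k = 2 sessions.
grand_mean = np.mean([s1, s2])
ms_between = k * np.sum((subject_mean - grand_mean) ** 2) / (n - 1)
ms_within = np.sum((s1 - subject_mean) ** 2 + (s2 - subject_mean) ** 2) / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

print(f"CV = {cv:.1f}%, CoR = {cor:.0f} kPa, ICC(1,1) = {icc:.2f}")
```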
Abstract:
OBJECTIVE To validate a radioimmunoassay for measurement of procollagen type III amino terminal propeptide (PIIINP) concentrations in canine serum and bronchoalveolar lavage fluid (BALF) and investigate the effects of physiologic and pathologic conditions on PIIINP concentrations. SAMPLE POPULATION Sera from healthy adult (n = 70) and growing dogs (20) and dogs with chronic renal failure (CRF; 10), cardiomyopathy (CMP; 12), or degenerative valve disease (DVD; 26); and sera and BALF from dogs with chronic bronchopneumopathy (CBP; 15) and healthy control dogs (10 growing and 9 adult dogs). PROCEDURE A radioimmunoassay was validated, and a reference range for serum PIIINP (S-PIIINP) concentration was established. Effects of growth, age, sex, weight, CRF, and heart failure on S-PIIINP concentration were analyzed. In CBP-affected dogs, S-PIIINP and BALF-PIIINP concentrations were evaluated. RESULTS The radioimmunoassay had good sensitivity, linearity, precision, and reproducibility and reasonable accuracy for measurement of S-PIIINP and BALF-PIIINP concentrations. The S-PIIINP concentration reference range in adult dogs was 8.86 to 11.48 μg/L. Serum PIIINP concentration correlated with weight and age. Growing dogs had significantly higher S-PIIINP concentrations than adults, but concentrations in CRF-, CMP-, DVD-, or CBP-affected dogs were not significantly different from control values. Mean BALF-PIIINP concentration was significantly higher in CBP-affected dogs than in healthy adults. CONCLUSIONS AND CLINICAL RELEVANCE In dogs, renal or cardiac disease or CBP did not significantly affect S-PIIINP concentration; dogs with CBP had high BALF-PIIINP concentrations. Data suggest that the use of PIIINP as a marker of pathologic fibrosis might be limited in growing dogs.
Abstract:
Background: The individual risk of developing psychosis after being tested for clinical high-risk (CHR) criteria (posttest risk of psychosis) depends on the underlying risk of the disease of the population from which the person is selected (pretest risk of psychosis), and thus on recruitment strategies. Yet, the impact of recruitment strategies on pretest risk of psychosis is unknown. Methods: Meta-analysis of the pretest risk of psychosis in help-seeking patients selected to undergo CHR assessment: total transitions to psychosis over the pool of patients assessed for potential risk and deemed at risk (CHR+) or not at risk (CHR−). Recruitment strategies (number of outreach activities per study, main target of outreach campaign, and proportion of self-referrals) were the moderators examined in meta-regressions. Results: 11 independent studies met the inclusion criteria, for a total of 2519 (CHR+: n = 1359; CHR−: n = 1160) help-seeking patients undergoing CHR assessment (mean follow-up: 38 months). The overall meta-analytical pretest risk for psychosis in help-seeking patients was 15%, with high heterogeneity (95% CI: 9%–24%, I² = 96%, P < .001). Recruitment strategies were heterogeneous and opportunistic. Heterogeneity was largely explained by intensive outreach campaigns (n = 11, β = −.166, Q = 9.441, P = .002), campaigns primarily targeting the general public (n = 11, β = −1.15, Q = 21.35, P < .001), and higher proportions of self-referrals (n = 10, β = −.029, Q = 4.262, P = .039), which diluted the pretest risk for psychosis in patients undergoing CHR assessment. Conclusions: There is meta-analytical evidence for overall risk enrichment (pretest risk for psychosis at 38 months = 15%) in help-seeking samples selected for CHR assessment as compared to the general population (pretest risk of psychosis at 38 months = 0.1%). Intensive outreach campaigns predominantly targeting the general population and a higher proportion of self-referrals diluted the pretest risk for psychosis.
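The pooled 15% figure is a random-effects summary of study-level proportions. A sketch of one standard way to obtain such a pooled estimate (logit-transformed proportions with DerSimonian-Laird weights) is shown below on invented counts; it does not reproduce the authors' model or the meta-regressions.

```python
import numpy as np

# Invented study-level data: transitions to psychosis / patients assessed.
events = np.array([12, 30, 8, 25, 15])
totals = np.array([90, 150, 80, 200, 110])

# Logit-transformed proportions and their approximate variances.
p = events / totals
y = np.log(p / (1 - p))
v = 1 / events + 1 / (totals - events)

# DerSimonian-Laird estimate of the between-study variance tau^2.
w = 1 / v
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
df = len(y) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)

# Random-effects pooling and back-transformation to the proportion scale.
w_re = 1 / (v + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
pooled = 1 / (1 + np.exp(-y_re))
ci = 1 / (1 + np.exp(-(y_re + np.array([-1.96, 1.96]) * se_re)))
i2 = max(0.0, (Q - df) / Q) * 100
print(f"Pooled risk = {pooled:.1%} (95% CI {ci[0]:.1%} to {ci[1]:.1%}), I^2 = {i2:.0f}%")
```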
Abstract:
BACKGROUND Hereditary thrombotic thrombocytopenic purpura (TTP) caused by ADAMTS13 mutations is a rare but serious condition. The prevalence is unknown, but seems to be high in Norway. OBJECTIVES To identify all patients with hereditary TTP in Central Norway and to investigate the prevalence of hereditary TTP and the population frequencies of two common ADAMTS13 mutations. PATIENTS/METHODS Patients were identified in a cross-sectional study within the Central Norway Health Region by means of three different search strategies. Frequencies of the ADAMTS13 mutations c.4143_4144dupA and c.3178 C>T (p.R1060W) were investigated in a population-based cohort (500 alleles) and in healthy blood donors (2104 alleles) by taking advantage of the close neighbourhood of the ADAMTS13 and ABO blood group gene loci. The observed prevalence of hereditary TTP was compared to the rates of ADAMTS13 mutation carriers in different geographical regions. RESULTS We identified 11 families with hereditary TTP in Central Norway during the 10-year study period. The prevalence of hereditary TTP in Central Norway was 16.7 × 10⁻⁶. The most prevalent mutation was c.4143_4144dupA, accounting for two thirds of disease-causing alleles among patients and having an allelic frequency of 0.33% in the Central, 0.10% in the Western, and 0.04% in the Southeastern Norwegian population. The allelic frequency of c.3178 C>T (p.R1060W) in the population was even higher (0.3-1%), but this mutation was infrequent among patients, with no homozygous cases. CONCLUSIONS We found a high prevalence of hereditary TTP in Central Norway and an apparently different penetrance of ADAMTS13 mutations.
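To make the two headline figures concrete, the arithmetic is simply cases divided by population size (prevalence) and mutant alleles divided by alleles screened (allele frequency). The counts below are round illustrative numbers, not the study's raw data.

```python
# Illustrative arithmetic only; these counts are not the study's raw data.
cases = 10                      # hypothetical number of identified patients
population = 1_000_000          # hypothetical region population
print(f"prevalence = {cases / population:.1e}")        # 1.0e-05, i.e. 10 per million

mutant_alleles = 5              # hypothetical mutant alleles found in a screen
alleles_screened = 1_000        # e.g. 500 individuals x 2 alleles each
print(f"allele frequency = {mutant_alleles / alleles_screened:.2%}")
```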
Abstract:
Objective In the pediatric population, traumatic injuries of the upper extremity are common. After therapy, a decision has to be made as to whether the mobility of the joint lies within a normal range. The purpose of this study was to provide initial normative data. We investigated whether there is a significant difference in the range of motion (ROM) between male and female participants and, furthermore, whether an effect of age can be detected. Methods We performed an institutional review board-approved study of healthy girls and boys aged between 2 and 16 years without any medical history of an upper extremity fracture. We investigated the active ROM of the elbow, wrist, metacarpophalangeal, and interphalangeal joints. Furthermore, age, handedness, weight, and height were recorded. A total of 171 children and adolescents with a mean age of 10.6 years were included and separated into four cohorts by age: 2 to 5, 6 to 10, 11 to 13, and 14 to 16 years. Results We found significant differences between the genders in the age group from 11 to 13 years for flexion of the elbow, pronation, flexion of the interphalangeal joint of the thumb, and flexion of the metacarpophalangeal joints of digits II to V. Furthermore, a significant difference between the genders could be demonstrated in the same joints, except for elbow flexion. Conclusion Our study contributes normative data for upper extremity ROM in the pediatric population and presents a gender-related difference in certain joints. Clinical Relevance Normative data for the ROM of upper extremity joints in children are helpful for the evaluation of pediatric orthopedic patients and provide a framework for therapeutic decisions. Since a great number of traumatic injuries in children affect the upper extremity, this information may help the physician to estimate the impact of the injury and decide on the therapeutic management.
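Normative tables of the kind described are typically tabulated as summary statistics (and percentiles) per age group and sex. The sketch below does this on synthetic elbow-flexion values with pandas; the numbers are invented and not the study's measurements.

```python
import numpy as np
import pandas as pd

# Synthetic elbow flexion ROM (degrees) by age group and sex, for illustration only.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age_group": rng.choice(["2-5", "6-10", "11-13", "14-16"], size=160),
    "sex": rng.choice(["f", "m"], size=160),
    "elbow_flexion": rng.normal(145, 6, size=160),
})

# Count, mean, SD and 5th/50th/95th percentiles per age group and sex.
norms = (df.groupby(["age_group", "sex"])["elbow_flexion"]
           .describe(percentiles=[0.05, 0.5, 0.95])
           [["count", "mean", "std", "5%", "50%", "95%"]])
print(norms.round(1))
```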