47 results for cognitive diagnostic model
Abstract:
OBJECTIVES Cognitive fluctuation (CF) is a common feature of dementia and a core diagnostic symptom for dementia with Lewy bodies (DLB), yet it remains difficult to detect accurately and reliably in clinical practice. This study aimed to develop a psychometric test that clinicians could use to facilitate the identification of CF, to improve the recognition and diagnosis of DLB and Parkinson disease dementia, and to improve the differential diagnosis of other dementias. METHODS We compiled a 17-item psychometric test for identifying CF and applied this measure in a cross-sectional design. Participants were recruited from the North East of England, and assessments were made in individuals' homes. We recruited people with four subtypes of dementia and a healthy comparison group; all subjects were administered this pilot scale together with other standard ratings. The psychometric properties of the scale were examined with exploratory factor analysis, and we examined the ability of individual CF items to discriminate between dementia subtypes. The sensitivity and specificity of discriminating items were explored along with validity and reliability analyses. RESULTS Participants comprised 32 comparison subjects, 30 people with Alzheimer disease, 30 with vascular dementia, 29 with DLB, and 32 with dementia associated with Parkinson disease. Four items significantly discriminated between dementia groups and showed good levels of sensitivity (range: 78.6%-80.3%) and specificity (range: 73.9%-79.3%). The scale had very good levels of test-retest (Cronbach's alpha: 0.82) and interrater (0.81) reliabilities. The four items loaded onto three different factors. These items were: 1) marked differences in functioning during the daytime; 2) daytime somnolence; 3) daytime drowsiness; and 4) altered levels of consciousness during the day.
CONCLUSIONS We identified four items that provide valid, sensitive, and specific questions for reliably identifying CF and distinguishing the Lewy body dementias from other major causes of dementia (Alzheimer disease and vascular dementia).
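The sensitivity and specificity figures reported above follow directly from a 2×2 table of item response against diagnosis. A minimal sketch (the counts below are illustrative only, not taken from the study):

```python
# Sensitivity and specificity from a hypothetical 2x2 table for one
# discriminating CF item (the counts are illustrative, not study data).
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) as fractions."""
    sensitivity = tp / (tp + fn)   # true positives / all with the condition
    specificity = tn / (tn + fp)   # true negatives / all without the condition
    return sensitivity, specificity

# e.g. 48 of 61 Lewy body cases flagged, 46 of 62 non-Lewy cases not flagged
sens, spec = sensitivity_specificity(tp=48, fn=13, tn=46, fp=16)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```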
Abstract:
Employees are encouraged to exercise after work to keep physically fit, but they should not suffer injury as a result. Some sports injuries that occur after work appear to be work-related and preventable. This study investigated whether cognitive failure mediates the influence of mental work demands and conscientiousness on risk-taking and risky, unaware behaviour during after-work sports activities. Participants were 129 employees (36% female) who regularly took part in team sports after work. A structural equation model showed that work-related cognitive failure significantly mediated the influence of mental work demands on risky behaviour during sports (p < .05) and also mediated the directional link between conscientiousness and risky behaviour during sports (p < .05). A path from risky behaviour during sports to sports injuries in the last four weeks was also significant (p < .05). Performance constraints, time pressure, and task uncertainty are likely to increase cognitive load and thereby boost cognitive failures both during work and during sports activities after work. Some sports injuries after work could therefore be prevented by work redesign.
Abstract:
OBJECTIVE To provide guidance on standards for reporting studies of diagnostic test accuracy for dementia disorders. METHODS An international consensus process on reporting standards in dementia and cognitive impairment (STARDdem) was established, focusing on studies presenting data from which sensitivity and specificity were reported or could be derived. A working group led the initiative through 4 rounds of consensus work, using a modified Delphi process and culminating in a face-to-face consensus meeting in October 2012. The aim of this process was to agree on how best to supplement the generic standards of the STARD statement to enhance their utility and encourage their use in dementia research. RESULTS More than 200 comments were received during the wider consultation rounds. The areas at most risk of inadequate reporting were identified and a set of dementia-specific recommendations to supplement the STARD guidance were developed, including better reporting of patient selection, the reference standard used, avoidance of circularity, and reporting of test-retest reliability. CONCLUSION STARDdem is an implementation of the STARD statement in which the original checklist is elaborated and supplemented with guidance pertinent to studies of cognitive disorders. Its adoption is expected to increase transparency, enable more effective evaluation of diagnostic tests in Alzheimer disease and dementia, contribute to greater adherence to methodologic standards, and advance the development of Alzheimer biomarkers.
Abstract:
Lumbar spinal instability (LSI) is a common spinal disorder and can be associated with substantial disability. The concept of defining clinically relevant classifications of disease, or 'target condition', is used in diagnostic research. Applying this concept to LSI, we hypothesize that a set of clinical and radiological criteria can be developed to identify patients with this target condition who are at high risk of 'irreversible' decompensated LSI, for whom surgery becomes the treatment of choice. In LSI, structural deterioration of the lumbar disc initiates a degenerative cascade of segmental instability. Over time, radiographic signs become visible: traction spurs, facet joint degeneration, misalignment, stenosis, olisthesis and de novo scoliosis. Ligaments, joint capsules, and local and distant musculature are the functional elements of the lumbar motion segment. Influenced by non-functional factors, these functional elements allow compensation of degeneration of the motion segment. Compensation may happen at each step of the degenerative cascade but cannot reverse it. However, compensation of LSI may lead to an alleviation or resolution of clinical symptoms. Conversely, the target condition of decompensated LSI may cause new symptoms and pain to appear. Functional compensation and decompensation are subject to numerous factors that can change, which makes estimation of an individual's long-term prognosis difficult. Compensation and decompensation may influence radiographic signs of degeneration; for example, the degree of misalignment and segmental angulation caused by LSI is influenced by the tonus of the local musculature. This conceptual model of compensation/decompensation may help to resolve the debate on functional and psychosocial factors that influence low back pain and to establish a new definition of non-specific low back pain.
Individual differences in identical structural disorders could be explained by compensated or decompensated LSI leading to changes in clinical symptoms and pain. Future spine surgery will have to carefully define and measure functional aspects of LSI, e.g. to identify a point of no return beyond which multidisciplinary interventions no longer allow re-compensation and surgery becomes the treatment of choice.
Abstract:
Although research and clinical interventions for patients with dual disorders have been described since as early as the 1980s, the day-to-day treatment of these patients remains problematic and challenging in many countries. Throughout this book, many approaches and possible pathways have been outlined. Based upon these experiences, some key points can be extracted to guide future developments. (1) New diagnostic approaches are warranted when dealing with patients who have multiple problems, given the limitations of the current categorical systems. (2) Greater emphasis should be placed on secondary prevention and early intervention for children and adolescents at an increased risk of later-life dual disorders. (3) Mental, addiction, and somatic care systems can be integrated, adopting a patient-focused approach to care delivery. (4) Recovery should be taken into consideration when defining treatment intervention and outcome goals. (5) It is important to reduce societal risk factors, such as poverty and early childhood adversity. (6) More resources are needed to provide adequate mental health care in the various countries. The development of European guidance initiatives would provide benefits in many of these areas, making it possible to ensure a more harmonized standard of care for patients with dual disorders.
Abstract:
BACKGROUND: Clinical disorders often share common symptoms and aetiological factors. Bifactor models acknowledge the role of an underlying general distress component and more specific sub-domains of psychopathology which specify the unique components of disorders over and above a general factor. METHODS: A bifactor model jointly calibrated data on subjective distress from The Mood and Feelings Questionnaire and the Revised Children's Manifest Anxiety Scale. The bifactor model encompassed a general distress factor, and specific factors for (a) hopelessness-suicidal ideation, (b) generalised worrying and (c) restlessness-fatigue at age 14, which were related to lifetime clinical diagnoses established by interviews at age 14 (concurrent validity) and current diagnoses at 17 years (predictive validity) in a British population sample of 1159 adolescents. RESULTS: Diagnostic interviews confirmed the validity of a symptom-level bifactor model. The underlying general distress factor was a powerful but non-specific predictor of affective, anxiety and behaviour disorders. The specific factors for hopelessness-suicidal ideation and generalised worrying contributed to predictive specificity. Hopelessness-suicidal ideation predicted concurrent and future affective disorder; generalised worrying predicted concurrent and future anxiety, specifically concurrent generalised anxiety disorders. Generalised worrying was negatively associated with behaviour disorders. LIMITATIONS: The analyses of gender differences and the prediction of specific disorders were limited due to a low frequency of disorders other than depression. CONCLUSIONS: The bifactor model was able to differentiate concurrent and predict future clinical diagnoses. This can inform the development of targeted as well as non-specific interventions for prevention and treatment of different disorders.
Abstract:
PURPOSE Rapid assessment and intervention is important for the prognosis of acutely ill patients admitted to the emergency department (ED). The aim of this study was to prospectively develop and validate a model predicting the risk of in-hospital death based on all information available at the time of ED admission, and to compare its discriminative performance with a non-systematic risk estimate made by the first triaging health-care provider. METHODS Prospective cohort analysis based on a multivariable logistic regression for the probability of death. RESULTS A total of 8,607 consecutive admissions of 7,680 patients admitted to the ED of a tertiary care hospital were analysed. The most frequent APACHE II diagnostic categories at the time of admission were neurological (2,052, 24 %), trauma (1,522, 18 %), infection categories [1,328, 15 %; including sepsis (357, 4.1 %), severe sepsis (249, 2.9 %), septic shock (27, 0.3 %)], cardiovascular (1,022, 12 %), gastrointestinal (848, 10 %) and respiratory (449, 5 %). The predictors of the final model were age, prolonged capillary refill time, blood pressure, mechanical ventilation, oxygen saturation index, Glasgow coma score and APACHE II diagnostic category. The model showed good discriminative ability, with an area under the receiver operating characteristic curve of 0.92, and good internal validity. The model performed significantly better than non-systematic triaging of the patient. CONCLUSIONS The use of the prediction model can facilitate the identification of ED patients with higher mortality risk. The model performs better than a non-systematic assessment and may facilitate more rapid identification and commencement of treatment of patients at risk of an unfavourable outcome.
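The reported area under the receiver operating characteristic curve can be read as a rank statistic: the probability that a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen survivor. A minimal sketch with illustrative risk scores (not the study's data):

```python
# Rank-based AUC (equivalent to the Mann-Whitney U statistic): the
# fraction of (death, survivor) pairs in which the death received the
# higher predicted risk, counting ties as half.
def auc(scores_pos, scores_neg):
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

died     = [0.90, 0.75, 0.40, 0.85]           # illustrative predicted risks
survived = [0.10, 0.20, 0.55, 0.05, 0.30]
print(auc(died, survived))                     # 19 of 20 pairs ranked correctly
```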
Abstract:
In the present article, we argue that it may be fruitful to incorporate the ideas of the strength model of self-control into the core assumptions of the well-established attentional control theory (ACT). In ACT, it is assumed that anxiety automatically leads to attention disruption and increased distractibility, which may impair subsequent cognitive or perceptual-motor performance, but only if individuals do not have the ability to counteract this attention disruption. However, ACT does not clarify which process determines whether one can volitionally regulate attention despite experiencing high levels of anxiety. In terms of the strength model of self-control, attention regulation can be viewed as a self-control act depending on the momentary availability of self-control strength. We review literature that has revealed that self-control strength moderates the anxiety-performance relationship, discuss how to integrate these two theoretical models, and offer practical recommendations of how to counteract negative anxiety effects.
Abstract:
Neuropsychologists often face interpretational difficulties when assessing cognitive deficits, particularly in cases of unclear cerebral etiology. How can we be sure whether a single test score below the population average indicates a pathological brain condition or merely normal variability? In the past few years, the topic of intra-individual performance variability has gained great interest. On the basis of a large normative sample, two measures of performance variability and their importance for neuropsychological interpretation will be presented in this paper: the number of low scores and the level of dispersion. We conclude that low scores are common in healthy individuals; the level of dispersion, on the other hand, is relatively small. Here, base rate information about abnormally low scores and abnormally high dispersion across cognitive abilities is provided to improve awareness of normal variability and to serve clinicians as additional interpretive measures in the diagnostic process.
Abstract:
Scoring schemes for clinical, ultrasonographic and radiographic findings in pigs were developed based upon a standardized animal model for Actinobacillus pleuropneumoniae infection. The results of these methods were compared to each other as well as with the corresponding pathomorphological findings during necropsy. Altogether 69 pigs of different breeding lines (Hampshire, Pietrain and German Landrace) were examined. Positive correlations were found between the results of all three methods as well as with the necropsy scores (p < 0.0001). Different pathomorphological findings were detected either by radiographic or by ultrasonographic examination, depending on the type of lung tissue alteration: alterations of the pleura as well as sequestration of lung tissue on the lung surface could be clearly identified during the ultrasonographic examination, while deep tissue alterations with no contact to the lung surface could be detected reliably by radiographic examination. The two methods complement each other, and the application of a combined ultrasonographic and radiographic examination of the thorax allows a comprehensive inspection of the lung condition. Particularly during the acute phase of the disease, the extent of lung tissue damage can be estimated more precisely than by clinical examination alone.
Abstract:
Air was sampled from the porous firn layer at the NEEM site in Northern Greenland. We use an ensemble of ten reference tracers of known atmospheric history to characterise the transport properties of the site. By analysing uncertainties in both the data and the reference gas atmospheric histories, we can objectively assign weights to each of the gases used for the depth-diffusivity reconstruction. We define an objective root mean square criterion that is minimised in the model tuning procedure. Each tracer constrains the firn profile differently through its unique atmospheric history and free air diffusivity, making our multiple-tracer characterisation method a clear improvement over the commonly used single-tracer tuning. Six firn air transport models are tuned to the NEEM site; all models successfully reproduce the data within a 1σ Gaussian distribution. A comparison between two replicate boreholes drilled 64 m apart shows differences in measured mixing ratio profiles that exceed the experimental error. We find evidence that diffusivity does not vanish completely in the lock-in zone, as is commonly assumed. The ice age − gas age difference (Δage) at the firn-ice transition is calculated to be 182 (+3/−9) yr. We further present the first intercomparison study of firn air models, where we introduce diagnostic scenarios designed to probe specific aspects of the model physics. Our results show that there are major differences in the way the models handle advective transport. Furthermore, diffusive fractionation of isotopes in the firn is poorly constrained by the models, which has consequences for attempts to reconstruct the isotopic composition of trace gases back in time using firn air and ice core records.
Abstract:
OBJECTIVE This study explored whether the acute serum marker S100B is related to post-concussive symptoms (PCS) and neuropsychological performance 4 months after paediatric mild traumatic brain injury (mTBI). RESEARCH DESIGN AND METHODS This prospective short-term longitudinal study investigated children (aged 6-16 years) with mTBI (n = 36, 16 males) and children with orthopaedic injuries (OI, n = 27, 18 males) as a control group. S100B in serum was measured during the acute phase and was correlated with parent-rated PCS and neuropsychological performance 4 months after the injury. MAIN OUTCOMES AND RESULTS The results revealed no between-group difference in acute S100B serum concentration. In children after mTBI, group-specific significant Spearman correlations were found between S100B and post-acute cognitive PCS (r = 0.54, p = 0.001) as well as between S100B and verbal memory performance (r = -0.47, p = 0.006). In children after OI, there were non-significant positive relations between S100B and post-acute somatic PCS. In addition, non-significant positive correlations were found between neuropsychological outcome and S100B in children after OI. CONCLUSIONS S100B was not specific for mild brain injuries and may also be elevated after OI. The group-specific association between S100B and ongoing cognitive PCS in children after mTBI should motivate further examination of the role of S100B as a diagnostic biomarker in paediatric mTBI.
Abstract:
In addition to cognitive decline, individuals affected by Alzheimer's disease (AD) can experience important neuropsychiatric symptoms including sleep disturbances. We characterized the sleep-wake cycle in the TgCRND8 mouse model of AD, which overexpresses a mutant human form of amyloid precursor protein resulting in high levels of β-amyloid and plaque formation by 3 months of age. Polysomnographic recordings in freely-moving mice were conducted to study sleep-wake cycle architecture at 3, 7 and 11 months of age and corresponding levels of β-amyloid in brain regions regulating sleep-wake states were measured. At all ages, TgCRND8 mice showed increased wakefulness and reduced non-rapid eye movement (NREM) sleep during the resting and active phases. Increased wakefulness in TgCRND8 mice was accompanied by a shift in the waking power spectrum towards fast frequency oscillations in the beta (14-20 Hz) and low gamma range (20-50 Hz). Given the phenotype of hyperarousal observed in TgCRND8 mice, the role of noradrenergic transmission in the promotion of arousal, and previous work reporting an early disruption of the noradrenergic system in TgCRND8, we tested the effects of the alpha-1-adrenoreceptor antagonist, prazosin, on sleep-wake patterns in TgCRND8 and non-transgenic (NTg) mice. We found that a lower dose (2 mg/kg) of prazosin increased NREM sleep in NTg but not in TgCRND8 mice, whereas a higher dose (5 mg/kg) increased NREM sleep in both genotypes, suggesting altered sensitivity to noradrenergic blockade in TgCRND8 mice. Collectively our results demonstrate that amyloidosis in TgCRND8 mice is associated with sleep-wake cycle dysfunction, characterized by hyperarousal, validating this model as a tool towards understanding the relationship between β-amyloid overproduction and disrupted sleep-wake patterns in AD.
Abstract:
Immunoassays are essential in the workup of patients with suspected heparin-induced thrombocytopenia. However, the diagnostic accuracy is uncertain with regard to different classes of assays, antibody specificities, thresholds, test variations, and manufacturers. We aimed to assess diagnostic accuracy measures of available immunoassays and to explore sources of heterogeneity. We performed comprehensive literature searches and applied strict inclusion criteria. Finally, 49 publications comprising 128 test evaluations in 15 199 patients were included in the analysis. Methodological quality according to the revised tool for quality assessment of diagnostic accuracy studies was moderate. Diagnostic accuracy measures were calculated with the unified model (comprising a bivariate random-effects model and a hierarchical summary receiver operating characteristics model). Important differences were observed between classes of immunoassays, type of antibody specificity, thresholds, application of confirmation step, and manufacturers. Combination of high sensitivity (>95%) and high specificity (>90%) was found in 5 tests only: polyspecific enzyme-linked immunosorbent assay (ELISA) with intermediate threshold (Genetic Testing Institute, Asserachrom), particle gel immunoassay, lateral flow immunoassay, polyspecific chemiluminescent immunoassay (CLIA) with a high threshold, and immunoglobulin G (IgG)-specific CLIA with low threshold. Borderline results (sensitivity, 99.6%; specificity, 89.9%) were observed for IgG-specific Genetic Testing Institute-ELISA with low threshold. Diagnostic accuracy appears to be inadequate in tests with high thresholds (ELISA; IgG-specific CLIA), combination of IgG specificity and intermediate thresholds (ELISA, CLIA), high-dose heparin confirmation step (ELISA), and particle immunofiltration assay. 
When making treatment decisions, clinicians should be aware of the diagnostic characteristics of the tests used; it is recommended that they estimate pretest probabilities using clinical scoring tools and derive posttest probabilities from the likelihood ratios.
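The recommended posttest-probability calculation combines a pretest probability with a likelihood ratio via Bayes' theorem in odds form. A minimal sketch, using the review's sensitivity and specificity thresholds (>95% and >90%) only as illustrative inputs:

```python
# Posttest probability from a pretest probability and a likelihood
# ratio, via Bayes' theorem in odds form (numbers are illustrative).
def posttest_probability(pretest_p, likelihood_ratio):
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# For a test with sensitivity 0.95 and specificity 0.90:
# LR+ = sensitivity / (1 - specificity)
lr_pos = 0.95 / (1 - 0.90)                  # = 9.5
print(posttest_probability(0.10, lr_pos))   # ≈ 0.51 from a 10% pretest probability
```

A negative result would instead use LR− = (1 − sensitivity) / specificity, shrinking the odds rather than inflating them.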
Abstract:
Symptoms of primary ciliary dyskinesia (PCD) are nonspecific and guidance on whom to refer for testing is limited. Diagnostic tests for PCD are highly specialised, requiring expensive equipment and experienced PCD scientists. This study aims to develop a practical clinical diagnostic tool to identify patients requiring testing. Patients consecutively referred for testing were studied. Information readily obtained from patient history was correlated with diagnostic outcome. Using logistic regression, the predictive performance of the best model was tested by receiver operating characteristic curve analyses. The model was simplified into a practical tool (PICADAR) and externally validated in a second diagnostic centre. Of 641 referrals with a definitive diagnostic outcome, 75 (12%) were positive. PICADAR applies to patients with persistent wet cough and has seven predictive parameters: full-term gestation, neonatal chest symptoms, neonatal intensive care admittance, chronic rhinitis, ear symptoms, situs inversus and congenital cardiac defect. Sensitivity and specificity of the tool were 0.90 and 0.75 for a cut-off score of 5 points. Area under the curve for the internally and externally validated tool was 0.91 and 0.87, respectively. PICADAR represents a simple diagnostic clinical prediction rule with good accuracy and validity, ready for testing in respiratory centres referring to PCD centres.
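A clinical prediction rule of this kind reduces to a weighted symptom checklist compared against a cut-off. The sketch below uses the seven PICADAR predictors and the cut-off of 5 points named in the abstract, but the per-item point weights are hypothetical placeholders, not the published PICADAR weights:

```python
# Clinical prediction rule as a weighted checklist with a cut-off,
# in the style of PICADAR. The seven predictors and the cut-off of 5
# come from the abstract; the point weights below are HYPOTHETICAL
# placeholders, not the published scoring.
HYPOTHETICAL_WEIGHTS = {
    "full_term_gestation": 1,
    "neonatal_chest_symptoms": 2,
    "neonatal_intensive_care": 2,
    "chronic_rhinitis": 1,
    "ear_symptoms": 1,
    "situs_inversus": 4,
    "congenital_cardiac_defect": 2,
}

def refer_for_pcd_testing(findings, cutoff=5):
    """findings: dict mapping predictor name -> bool. Returns (score, refer)."""
    score = sum(w for item, w in HYPOTHETICAL_WEIGHTS.items()
                if findings.get(item, False))
    return score, score >= cutoff

score, refer = refer_for_pcd_testing(
    {"situs_inversus": True, "chronic_rhinitis": True, "ear_symptoms": True})
print(score, refer)   # 6 True under these placeholder weights
```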